More efficient way to parse git commands? - bash

In short: I would like the prompt to appear faster, although it's not sluggish.
Making a custom prompt for my bash terminal; the following list is implemented in my /etc/bash.bashrc.
I already use the "gitstatus" repo, which speeds up certain git commands, so I think the slowdown comes from the sheer number of commands themselves. I want to know if I can generally use FEWER git commands to do the same thing.
Here is a list of everything I do:
1. Obtain branch (if HEAD is detached, commands requiring it are skipped)
2. Check for upstream
3. git rev-list --left-right --count "$branch"..."$upstream" to check if ahead or behind (see the sketch after this list)
4. Check for stashes
5. Check for untracked files (separately from #8 below, as they are located early in the prompt; I treat them as a higher-priority "problem")
EDIT: Disregard #5. I now call command #8 first, obtain this information from it, and append #5's output to PS1 before #8's.
6. Check for dirty branch (done separately; I know #8 provides this info, but this command is called earlier on, and I like the symbol there)
7. Check for remote
8. Check all at once for modified, added, removed, or unmerged files by parsing git status -s
Each of these is run using one git command. I can provide an image as well if needed.
On Bash for Windows terminal.
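
For reference, a sketch of how #3 can read both counts in a single call, assuming $branch and $upstream have already been determined:

# first count = commits only on $branch (ahead), second = only on $upstream (behind)
read -r ahead behind < <(git rev-list --left-right --count "$branch...$upstream" 2>/dev/null)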

The answer to my own extremely specific question:
In my case, I'm trying to parse git status -s in a function foo, and call a function bar which detects whether untracked files exist. The catch is that bar's output is appended to PS1 before foo's. That seems fine, but I'm trying to minimize the number of git commands called every time in my bashrc. So, instead of parsing git status -s in foo and then separately finding out whether untracked files exist in bar, I can call foo, create an untracked_files_exist variable, and set it to true if git status -s says so. Then I can call bar afterwards, use untracked_files_exist however I want, and append the functions' outputs to PS1 in whichever order I like after both have been called.
If that doesn't make sense:
If you want a fast prompt, call one parse-able git command that displays as much information as possible. If you want the prompt to contain that information in a different order than the git command outputs it, don't append to PS1 INSIDE the parsing functions. Do so AFTER, so you have control over the order of the prompt.
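
A minimal sketch of that shape, reusing the foo/bar names from above (the prompt symbols are placeholders; foo sets globals rather than printing, so its findings survive for bar):

foo() {   # the single git call: parses git status -s and records everything
    modified_symbol="" untracked_files_exist=false
    local line
    while IFS= read -r line; do
        case $line in
            '??'*) untracked_files_exist=true ;;   # remembered for bar
            *)     modified_symbol="*" ;;          # any tracked change
        esac
    done < <(git status -s 2>/dev/null)
}

bar() {   # no extra git call; reuses what foo found
    untracked_symbol=""
    $untracked_files_exist && untracked_symbol="?"
}

set_prompt() {
    foo   # must run first, since bar depends on it
    bar
    # append in whatever order the prompt should display, not call order
    PS1="\u@\h ${untracked_symbol}${modified_symbol}"'\$ '
}
PROMPT_COMMAND=set_prompt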

Related

git add - can I force exit code 0 when the only error is the presence of ignored files?

Main question
Committing and pushing all local changes since the last commit is an operation used very frequently when working with git, so it may be beneficial to optimize it into one command with a script like this as its backbone:
git add * && git commit -m "my-message" && git push
But for some reason this failed in the repo I tried it in. After some searching, I found out (using echo $?) that git add * returns 1 when ignored files are present. This is usually not a problem, as it doesn't affect typing out the next command interactively, but in a script it is critical to be able to decide automatically whether to continue. The obvious workaround to this problem is simply using
git add * || true && git commit -m "my-message" && git push
but I want to avoid this as it continues execution even if actual errors happen in git add.
Is there a configuration option (or any other solution) that makes git report an exit code of 0 when ignored files are included in the add command but no other error happens?
PS
I know that automating this is not a very wise use of time; that is not the purpose.
This is just the most simplified version of the code that fails, the actual script doesn't use hard coded commit messages.
This is caused by * being expanded by Bash to all the non-hidden files in the current directory (by default) and passed as separate arguments to git add. git complains because some of these explicitly named (as far as git can tell) files are ignored, so the command is ambiguous.
git add . will do basically the same thing, the only difference being that it also includes dotfiles, and it returns exit code 0 even if the current directory includes ignored files.
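
A quick way to see the difference (a sketch, assuming the current directory contains at least one ignored, non-hidden file):

git add * ; echo $?   # 1: the glob hands git the ignored file by name, which it reports as an error
git add . ; echo $?   # 0: git walks the directory itself and silently skips ignored files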

Need to run a "hook" on new Git Tag (Git hook for newly pulled tags)

To begin, I looked at this question which seems to be the only one regarding this topic:
How do I react to new tags in git hooks?
But I do not understand what hook that is or how it is being used. I simply want to run a little script that updates a file whenever I git pull and new tags are received.
I tried putting it in .git/hooks/update and .git/hooks/post-receive:
#!/bin/bash
exec < /dev/tty
CURRENT_TAG=$(git tag --contains)
echo Test Test
echo "LATEST_TAG: \"${CURRENT_TAG}"\" > "config/latest_tag.yml"
I would like to use Git hooks if possible. I was thinking of doing alias "git pull"="git pull && ./update_script.sh", but I cannot alias a spaced word, nor can I alias something and force the rest of the team to remember it.
As the documentation says, the post-receive and update hooks are "server" side hooks, i.e. they run on the server in reaction to a push from a client. What you want is the opposite, for which there unfortunately is no hook.
Since you mention that aliasing the command wouldn't work, you could use a function as the next best thing. It receives the arguments, which can then be examined.
git() { env git "$@" && [ "$1" = "fetch" -o "$1" = "pull" ] && ./update_script.sh; }
Care must be taken when shadowing commands like this to not cause infinite recursion. Within the function body, you must never call git directly, as that would re-run the function, not the git command. I've used env to resolve the actual git binary, but using an absolute path would work just as well.
Note that it is actually git fetch that will get the new tags and git pull simply calls git fetch internally. I therefore included handling for both fetch and pull. Also note that it will shadow the git command in all repositories, so it would need to be extended if the special handling should only be applied to specific repositories.
In case your update_script.sh is tracked within the repository itself there are at least two things to be aware of:
Anyone who can push changes to that file will essentially be enabled to run arbitrary commands on any machine where pulls happen.
As the pull or fetch commands may be run anywhere in the work tree, not just at the top level, the path should first be resolved using [env] git rev-parse --show-toplevel.
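
Putting both notes together, a sketch of the wrapper with the script path resolved from the repository root:

git() {
    env git "$@" || return   # run the real git; propagate its failure
    if [ "$1" = fetch ] || [ "$1" = pull ]; then
        # resolve the top level so the hook works from any subdirectory
        "$(env git rev-parse --show-toplevel)/update_script.sh"
    fi
}

Unlike the one-liner, this also preserves git's exit status for commands other than fetch and pull.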

Output from git log gets lost when piped to file - what am I missing?

I am trying to get some information about some git commits via the command line as part of a larger automated tool I am building. The information I want is available via this git log command:
git log --branches --graph --oneline --parents
which, in a terminal, produces a coloured graph where each line has the hashes and parents, the branch/tag names in brackets, and the commit message.
This is great, because it has the hashes and tags that I want, as well as the commit messages. However, when I pipe this to a file, the stuff in the brackets seems to somehow get lost. I'm not too interested in the colour, but I do want the plain text, as I would expect from any *nix-like program.
The output I get instead omits some of what I want (e.g. the tag information in the brackets).
I'm not sure how or why this information gets lost when being piped somewhere. I feel like this might be something incredibly simple and obvious.
I experience the same problem whether I do this in Bash on Arch Linux (with the latest version of git) or in the MINGW64 Bash environment in Windows.
Question: How can I completely capture git log's output without losing the information that is being lost when piping to a file?
You need to add the --decorate option to your log command. Set it either as --decorate=short or --decorate=full.
It appears in your config you've probably got log.decorate set to auto, which means that tags and such are displayed (in short form) when writing to the terminal, but not to a pipe or other file.
Similarly, there are config values and command options that govern whether (and when) colour codes are output; so
git log --branches --graph --oneline --parents --decorate=short --color=always
would output the tags and colors even when redirected to a file.
Note that when scripting you should probably include these options on the command line rather than make assumptions about which config values are set. Depending on what you do with the output, log may not be the best command to use in a script anyway, as git commands are somewhat divided into those meant for human consumption and those meant for scripting.
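
For example, if the script ultimately just needs the hash-to-ref mapping, a command built for scripting such as git for-each-ref produces stable, pipe-friendly output with no decoration or colour settings involved:

# short hash and short ref name for every local branch and tag
git for-each-ref --format='%(objectname:short) %(refname:short)' refs/heads refs/tags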
Your git command:
git log --branches --graph --oneline --parents
does not produce the same output for me that you show in your first example. It does, however, produce output similar to the second example. The ref names in the brackets (branches and tags) are included when you add the --decorate option (see the git-log documentation).
Note that your git log format can be controlled in your ~/.gitconfig file. For example, the format you've shown in your question looks like it might be achieved with options like this:
git log --decorate --graph --all --format='%C(auto,yellow)%h%C(auto,reset) %C(auto,yellow)%p%C(auto,reset) %C(auto,red)%d%C(auto,reset) %s %C(auto,green)%an%C(auto,reset) (%C(auto,cyan)%ar%C(auto,reset))'
If you are trying to automate things, you can specify a format that is better tuned to your requirements. Check the git documentation on pretty-formats for details. In particular, look at the %C placeholder and the %C(auto,...) notation, which causes colour controls to be used only if the command is run from a terminal.
If you always want to generate the colour information, regardless of whether you're using git log interactively, you can remove each occurrence of auto, in the above line (or alias).

How to convert this script into a custom mercurial command?

I have the following script:
#!/bin/bash
if [ $# -ne 2 ]; then
    echo -n "$0 - a utility for applying uncommitted changes to a "
    echo "remote hg repository locally also"
    echo "Usage: $0 user@hostname path/to/repository"
    exit 1
fi
user_at_hostname="$1"
remote_path="$2"
ssh "$user_at_hostname" hg -R "$remote_path" diff | hg import --no-commit -
It's not the most glorious piece of code, and I would rather do something more "mercurial" than that, so to speak. Specifically, I was wondering whether I could achieve the same using a mercurial alias / custom command. Can I?
PS - I had also thought about maybe issuing some sort of shelve command on the remote repository instead of just getting a diff, but I don't want to make thing too complicated.
If you just want to convert this script into an hg foo command without changing it, use a shell alias. Just copy the last line and replace the custom variables with $1 and $2.
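
For example, a sketch as a shell alias in your ~/.hgrc (rdiff is an arbitrary name for the new command):

[alias]
# usage: hg rdiff user@hostname path/to/repository
rdiff = !ssh "$1" hg -R "$2" diff | $HG import --no-commit -

In shell aliases, $HG expands to the hg binary currently running, which avoids PATH surprises.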
If you want to make this look more like a "normal" Mercurial workflow, you could start by committing the changes and then pulling them. I imagine that you are avoiding this workflow so that you can change your mind about these modifications without polluting your repository's history with "oops" commits. If that is the case, then you will likely be interested in the Evolve extension.

The Evolve extension is intended to provide a safe and reasonably well-behaved system for sharing mutable history. With it, you can commit a change; share it with another repository; amend, rebase, squash, or otherwise modify the change; and then share the modified changeset with the same repository or a different one. You can also prune changesets from history and share the fact that you pruned them. If this sharing causes problems, such as amending a commit which has descendants, Mercurial will detect those problems and offer a fix (e.g. rebasing the descendants onto the new version of the commit) which you can execute automatically with hg evolve. While the extension is still experimental, it does basically work for most simple use cases.
If experimental software isn't of interest to you, you can still flag the repository as non-publishing. This will allow you to use more traditional history-editing machinery such as hg rebase, hg histedit, and hg strip even after you have pushed to the repository. However, revisions which are destroyed in one repository will not automatically vanish from other repositories without the evolve extension. You will have to strip them by hand.
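
For reference, flagging a repository as non-publishing is a one-line setting in its .hg/hgrc:

[phases]
# changesets committed to or pushed into this repo stay "draft", i.e. still editable
publish = False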
Finally, note that hg push --force does not destroy revisions. It creates new anonymous branches, typically resulting in a messy history, but without actually losing any data. It is different from git in this fashion.

Get last bash command including pipes

I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened, for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and processes it anyway), and adding extra conditionals for each alias doesn't seem like a clean solution. Second, I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history would make the script miss the last command unless history tracks it.
I also tried using ${#PIPESTATUS[@]} to see whether the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it also seems to only ever be aware of one command.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think you will need to change your implementation a bit and use the history command to get this to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether a command is actually stored in any PATH directory. Good luck.
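
A rough sketch of that suggestion, assuming it runs inside the interactive shell itself (history and alias are shell-local) and that aliases are simple one-word prefixes:

# grab the last history entry, stripped of its number (and timestamp, if any)
last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
first_word=${last%% *}

# if the first word is an alias, substitute its expansion
expansion=$(alias "$first_word" 2>/dev/null)
if [ -n "$expansion" ]; then
    expansion=${expansion#"alias $first_word='"}   # strip the "alias name='" prefix
    expansion=${expansion%"'"}                     # strip the trailing quote
    last="$expansion${last#"$first_word"}"
fi

# the command after the final pipe is the one that holds the file open
last_segment=${last##*|}
echo "$last_segment"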
