So, from the output below, I want to fetch the 6e4a01192927 part in my bash script:
$ hg log -l1
changeset: 1775:6e4a01192927
branch: xxx
tag: tip
parent: 1772:7892c965215d
parent: 1774:5a9a3e060869
user: Firstname Lastname <someone#something.xyz>
date: Thu Jan 25 09:55:35 2018 +0000
summary: Merged in fix/something (pull request #85)
I am on a Mac (El Capitan), so it seems I am very limited in the ways I can grep it; for example, grep -oP isn't supported.
I have gotten this far but then hit a brick wall:
$ hg log -l1 | sed -n 1p # fetching first line only
changeset: 1775:6e4a01192927
If you are absolutely sure of the search string, use awk to match the first field and print the last field of that line:
hg log -l1 | awk -F: '$1=="changeset"{print $NF}'
Here $1 and $NF represent the first and last fields produced by splitting on the delimiter :.
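The field-splitting can be sanity-checked without a repository by feeding a sample line (copied from the question's output) to awk:

```shell
# Sample first line of `hg log -l1` output, taken from the question
line='changeset: 1775:6e4a01192927'
# -F: splits on ':'; $1 is "changeset" and $NF is the last field, the short hash
rev=$(printf '%s\n' "$line" | awk -F: '$1=="changeset"{print $NF}')
echo "$rev"
```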
Another Bash trick would be to read the first line of the command's output and use parameter-expansion syntax:
read -r firstLine < <(hg log -l1)
echo "${firstLine##*:}"
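On the sample line, ${firstLine##*:} strips the longest prefix ending in a colon, leaving just the hash:

```shell
# Sample line standing in for the output captured by read
firstLine='changeset: 1775:6e4a01192927'
rev="${firstLine##*:}"   # remove everything up to the last ':'
echo "$rev"
```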
Mercurial has many ways to give you just the id of the most recent (or any) commit.
$ hg id -i -r .
68e111c5bd42
or using log:
$ hg log -l 1 --template '{node|short}'
68e111c5bd42
This spawns fewer processes and is more portable.
I feel like there are probably good ways to do this in bash, but I'm struggling to find direct explanations of the best tools for something like the following:
Given an input of string data from git log
filter commits down to only those between a set of tags
then format each commit, pulling convention based snippets of data
output the formatted snippets
So far, I've found that:
set $(git tag -l | sort -V | tail -2)
currentVersion=$2
previousVersion=$1
will give me variables for relevant tags. I can then do this:
$(git log v9.5.3..) | {what now?}
to pipe all commits from the previous tag to current. But I'm not sure on the next step?
Will the commits coming from the pipe be considered an array?
If not, how do I differentiate each commit distinctly?
Should I run a function against the piped input data?
Am I thinking about this completely wrong?
If this were JavaScript, I'd run a loop over what would assuredly be an array input, regex out the snippets I want from each commit, then output a formatted string with the snippets, probably in a map call or something. But I'm not sure whether this is how I should be thinking in Bash, with pipes.
Expecting data for each commit like:
commit xxxxxxxxxx
Author: xxxx xxxx <xxx#xxx.xxx>
Date: Thu Jul 29 xx:xx:xx 2021 +0000
Subject of the commit
A multiline description of the commit, the contents of which are not
really relevant for what I need, but still useful for consideration.
{an issue id}
And right now I'd be looking to grab:
the commit hash
the author
the date
the subject
the issue id
Would appreciate any insight as to the normal way to do this sort of thing with bash, with pipes, etc. I'd love to get my head right with Bash and do it here, rather than retreat back to my comfort zone of JS. Thanks!
Alright, I spent some time and found a solution that works for me. I'm (again) very much not a bash scripter, so I'm still curious whether there are better ways to do this, but this works:
PREVIOUS_VERSION=${1:-$(git tag | tail -n 2 | head -n 1)}
CURRENT_VERSION=$(git tag | tail -n 1)
URL="https://your-hosting-domain-here/q/"
echo "-----RELEASE: $CURRENT_VERSION-----"
echo ""
parse_commits() {
while read -r LINE
do
if grep -q "Author:" <<< "$LINE"; then
echo "$LINE"
read -r DATE_LINE; echo "$DATE_LINE"
read -r SUBJECT_LINE; echo "Subject: $SUBJECT_LINE"
fi
if grep -q "Change-Id:" <<< "$LINE"; then
CHANGE_ID=$(echo "$LINE" | awk '{print $NF}')
echo "$URL$CHANGE_ID"
echo ""
fi
done
}
git log $PREVIOUS_VERSION.. | strings | parse_commits
I'll explain for anyone curious as I was as to how this could be done:
PREVIOUS_VERSION=${1:-$(git tag | tail -n 2 | head -n 1)}
This is simply a means within Bash to assign a variable to the incoming argument, with a fallback if it's not defined.
git log $PREVIOUS_VERSION.. | strings | parse_commits
This uses git log to output all commits since the given version. We then pipe those commits through the strings utility (part of binutils, not a Bash builtin), which filters the input down to lines of printable text, and then pipe that to our custom function.
while read -r LINE
This starts a while loop using the Bash builtin read, which is super useful for what I needed to do. Essentially, it reads one line from input and assigns it to the named variable. So this reads a line and assigns it to the variable LINE.
if grep -q "Author:" <<< "$LINE"; then
This is a conditional that uses grep, which searches its input for the given string. We don't have a file, we have a string in the variable $LINE, but the Bash here-string operator <<< feeds that string to the command's standard input. So this line runs the block below it if the given LINE contains the substring "Author:".
read DATE_LINE; echo "$DATE_LINE"
Once we've found our desired position after the Author: line (and echoed it), we simply read the next line into the variable DATE_LINE and immediately echo that as well. We do the same for the subject line.
Up until now, we probably could have used simpler commands to achieve a similar result, but now we get to the tricky part (for me at least, not knowing much about Bash).
CHANGE_ID=$(echo "$LINE" | awk '{print $NF}')
After a similar conditional grepping for the substring Change-Id:, we grab the last word of LINE by echoing it and piping that to awk, which was the best way I could find to extract a substring. awk has a special variable NF that holds the number of fields (words) in the record, so $NF references the last field, just as $5 would reference the fifth of five. We assign that to a variable, then echo it out in a given format (a URL in my case).
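For example, with a hypothetical Change-Id line (the id value below is made up):

```shell
# Hypothetical sample line; awk's $NF is the last whitespace-separated field
LINE='Change-Id: I1f2e3d4c'
CHANGE_ID=$(echo "$LINE" | awk '{print $NF}')
echo "$CHANGE_ID"
```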
The ultimate output looks like this:
-----RELEASE: v9.6.0-----
Author: xxxxxx xxxx <xxxx#xxxx.com>
Date: Fri Jul 30 xx:xx:xx 2021 +0000
Subject: The latest commit subject since last tag
https://your-hosting-domain-here/q/xxxxxxxxxxxxxxx
Author: xxxxxx xxxx <xxxx#xxxx.com>
Date: Thu Jul 29 xx:xx:xx 2021 +0000
Subject: The second latest commit subject
https://your-hosting-domain-here/q/xxxxxxxxxxxxxxx
... (and so on)
Hope that was helpful to someone, and if not, to future me :)
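For reference, git can also emit exactly these fields itself via --pretty placeholders (%H hash, %an author, %ad date, %s subject), one tab-separated record per commit, which avoids most of the line-by-line parsing. The loop below is a sketch run against simulated output, since no repository is assumed here; the real command would be something like git log --pretty=format:'%H%x09%an%x09%ad%x09%s' "$PREVIOUS_VERSION..":

```shell
# Simulated tab-separated `git log` output (hash, author, date, subject made up)
sample=$'abc123\tJane Doe\tThu Jul 29 2021\tSubject of the commit'
result=$(while IFS=$'\t' read -r hash author date subject; do
  printf 'commit=%s author=%s subject=%s\n' "$hash" "$author" "$subject"
done <<< "$sample")
echo "$result"
```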
I have a repository with a bunch of C files. Given the SHA hashes of two commits,
<commit-sha-1> and <commit-sha-2>,
I'd like to write a script (probably bash/ruby/python) that detects which functions in the C files in the repository have changed across these two commits.
I'm currently looking at the documentation for git log, git commit and git diff. If anyone has done something similar before, could you give me some pointers on where to start or how to proceed?
Git doesn't offer this directly, but you could combine git with your favorite tagging system, such as GNU Global, to achieve it. For example:
#!/usr/bin/env sh
global -f main.c | awk '{print $NF}' | cut -d '(' -f1 | while read i
do
if [ $(git log -L:"$i":main.c HEAD^..HEAD | wc -l) -gt 0 ]
then
printf "%s() changed\n" "$i"
else
printf "%s() did not change\n" "$i"
fi
done
First, you need to create a database of functions in your project:
$ gtags .
Then run the above script to find functions in main.c that were
modified since the last commit. The script could of course be more flexible; for example, it could handle all *.c files changed between two commits, as reported by git diff --stat.
Inside the script we use -L option of git log:
-L <start>,<end>:<file>, -L :<funcname>:<file>
Trace the evolution of the line range given by
"<start>,<end>" (or the function name regex <funcname>)
within the <file>. You may not give any pathspec
limiters. This is currently limited to a walk starting from
a single revision, i.e., you may only give zero or one
positive revision arguments. You can specify this option
more than once.
See this question.
Bash script:
#!/usr/bin/env bash
git diff | \
grep -E '^(@@)' | \
grep '(' | \
sed 's/@@.*@@//' | \
sed 's/(.*//' | \
sed 's/\*//' | \
awk '{print $NF}' | \
uniq
Explanation:
1: Get diff
2: Get only lines with hunk headers; if the 'optional section heading' of a hunk header exists, it will be the definition of the modified function
3: Pick only hunk headers containing open parentheses, as they will contain function definitions
4: Get rid of the '@@ [old-file-range] [new-file-range] @@' part of the lines
5: Get rid of everything after opening parentheses
6: Get rid of '*' from pointers
7: Print the last field (i.e. column) of each record (i.e. line) with awk.
8: Get rid of duplicate names.
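The pipeline can be tried on a single sample hunk header without a repository (the function name and line ranges below are made up):

```shell
# Sample diff hunk header with an 'optional section heading'
hunk='@@ -10,7 +10,8 @@ int parse_line(char *buf)'
fn=$(printf '%s\n' "$hunk" |
  grep -E '^(@@)' |
  grep '(' |
  sed 's/@@.*@@//' |
  sed 's/(.*//' |
  sed 's/\*//' |
  awk '{print $NF}' |
  uniq)
echo "$fn"
```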
I'm trying to put together a bash/sh script that gets the UTC time of the last commit from an svn repo (the other VCSs are easy).
I understand that I can just do svn propget --revprop -rHEAD svn:date and get it rather easily, but there is no guarantee that the svn checkout will be online, so I'd prefer an offline version if possible.
Maybe something to do with getting the UTC time from svn info (by fiddling with the timezones)?
Summary: how can I get the UTC time of an svn commit while not having access to the server?
Thanks
You can use svn log -r COMMITTED and extract date info from it. This is valid for offline copies.
svn log -r COMMITTED | sed -nE 's/^r.*\| ([0-9]{4}-[0-9]{2}-[0-9]{2} \S+ \S+).*/\1/p' | xargs -i -- date -ud '{}' '+%s'
The -u option makes date show UTC time instead.
Actually, we don't need to use xargs:
date -ud "$(exec svn log -r COMMITTED | sed -nE 's/^r.*\| ([0-9]{4}-[0-9]{2}-[0-9]{2} \S+ \S+).*/\1/p')" '+%s'
UPDATE: I got the wrong command. The command above won't work offline. Here's the right one:
date -ud "$(svn info | sed -nE 's/^Last Changed Date: (\S+ \S+ \S+).*/\1/p')" '+%s'
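The sed extraction can be checked against a sample svn info line; GNU date and GNU sed are assumed, and the date value below is just an example:

```shell
# Sample 'Last Changed Date' line as printed by `svn info`
info='Last Changed Date: 2010-02-19 15:48:16 -0500 (Fri, 19 Feb 2010)'
# Extract the date/time/offset fields and convert to a UTC epoch timestamp
epoch=$(date -ud "$(printf '%s\n' "$info" | sed -nE 's/^Last Changed Date: (\S+ \S+ \S+).*/\1/p')" '+%s')
echo "$epoch"
```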
I'm silly. As soon as I realised I just needed to convert one timezone to another, it was obvious:
date -u +[format] -d $(svn info | <some grepping and cutting here>)
In my case, this is:
date -u +"%Y%m%d-%H%M" -d "$(svn info | grep 'Date' | cut -d' ' -f4-6)"
Of course, my solution probably isn't optimal, and if someone knows a better way, that'd be much appreciated :)
It turns out that the xml output of "svn info" has a zulu timestamp:
$ svn info --xml | grep date
<date>2015-04-30T15:38:49.371762Z</date>
So your bash command might be:
svn info --xml | grep -oP '(?<=<date>).*?(?=</date>)'
I just stumbled on this post and ended up figuring out that svn uses the environment variable TZ, so, for example:
TZ= svn log
will log dates in UTC.
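The TZ mechanism is not svn-specific; any tool that uses the standard C time functions honours it, which is easy to see with GNU date:

```shell
# Force UTC output regardless of the system timezone
stamp=$(TZ=UTC0 date -d @0 '+%Y-%m-%d %H:%M')
echo "$stamp"
```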
I want to write a shell script in bash to deploy websites from an svn repository. When I deploy a website, I name the exported directory website_name-Rrevision_number. I'd like the bash script to automatically rename the exported directory, so it needs to learn the current revision number from the export directory. If I run
$> svn info http://svn-repository/trunk
Path: trunk
URL: http://svn-repository/mystery/trunk
Repository Root: http://svn-repository/mystery
Repository UUID: b809e6ab-5153-0410-a985-ac99030dffe6
Revision: 624
Node Kind: directory
Last Changed Author: author
Last Changed Rev: 624
Last Changed Date: 2010-02-19 15:48:16 -0500 (Fri, 19 Feb 2010)
The number after the string Revision: is what I want. How do I get that into a bash variable? Do I do string parsing of the output from the svn info command?
Use svnversion. It outputs the revision number/range with minimal additional cruft.
REVISION=`svn info http://svn-repository/trunk |grep '^Revision:' | sed -e 's/^Revision: //'`
It's simple, if inelegant.
Parsing the 'Revision' string is not portable across different locales.
Eg. with my locale it is like:
...
Wersja: 6583
Rodzaj obiektu: katalog
Zlecenie: normalne
Autor ostatniej zmiany: ...
Ostatnio zmieniona wersja: 6583
Data ostatniej zmiany: 2013-03-21 11:33:44 +0100 (czw)
...
You don't want to parse that :)
So the best approach is to use svnversion, as oefe suggested. That is the tool intended for this purpose.
Just use one awk command; it's much simpler as well:
var=$(svn info http://svn-repository/trunk | awk '/^Revision:/{print $2}')
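The pattern can be checked against the sample output from the question; /^Revision:/ is anchored to the start of the line, so the 'Last Changed Rev' line does not interfere:

```shell
# Two sample lines from the question's `svn info` output
rev=$(printf 'Revision: 624\nLast Changed Rev: 624\n' | awk '/^Revision:/{print $2}')
echo "$rev"
```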
Without using sed, grep or awk (requires Subversion 1.9 or later):
REVISION=`svn info --show-item=revision --no-newline`
svn info http://svn-repository/trunk | grep '^Revision' | tr -d 'Revision: '
Spits out the revision. (Note that tr -d deletes every character in the given set, not the literal string.)
Use backticks in your shell script to execute this and assign the results to a variable:
REVISION=`svn info http://svn-repository/trunk | grep '^Revision' | tr -d 'Revision: '`
There are probably a dozen different ways to do this, but I'd go with something simple like:
revision="$(svn info http://svn-repository/trunk | grep "^Revision:" | cut -c 11-)"
This will give you the head revision number
svn info -r 'HEAD' | grep Revision | egrep -o "[0-9]+"
egrep is extended grep.
REVISION=$(svn info http://svn-repository/trunk | grep '^Revision:' | sed -e 's/^Revision: //')
echo $REVISION
I want to write a script to find the latest version of an rpm of a given package available from a mirror, e.g.: http://mirror.centos.org/centos/5/updates/x86_64/RPMS/
The script should be able to run on the majority of Linux flavors (e.g. CentOS, Red Hat, Ubuntu), so a yum-based solution is not an option. Is there an existing script that does this? Or can someone give me a general idea of how to go about it?
Thanks to levislevis85 for the wget command line. Try this:
ARCH="i386"
PKG="pidgin-devel"
URL=http://mirror.centos.org/centos/5/updates/x86_64/RPMS
DL=`wget -O- -q $URL | sed -n 's/.*rpm.>\('$PKG'.*'$ARCH'.rpm\).*/\1/p' | sort | tail -1`
wget $URL/$DL
I will put my comment here; otherwise the code would not be readable.
Try this:
ARCH="i386"
PKG="pidgin-devel"
URL=http://mirror.centos.org/centos/5/updates/x86_64/RPMS
DL=`wget -O- -q $URL | sed -n 's/.*rpm.>\('$PKG'.*'$ARCH'.rpm\).*<td align="right">\(.*\)-\(.*\)-\(.*\) \(..\):\(..\) <\/td><td.*/\4 \3 \2 \5 \6 \1/p' | sort -k1n -k2M -k3n -k4n -k5n | cut -d ' ' -f 6 | tail -1`
wget $URL/$DL
What it does is:
wget - fetch the index file
sed - cut out some parts and put them together in a different order. The result should be Year Month Day Hour Minute and Package, like:
2009 Oct 27 01 14 pidgin-devel-2.6.2-2.el5.i386.rpm
2009 Oct 30 10 49 pidgin-devel-2.6.3-2.el5.i386.rpm
sort - order the columns; n stands for numeric and M for month
cut - keep only field 6
tail - show only the last entry
The problem with this could be that if an older package release is listed after a newer one, the script will pick the wrong file. If the output of the site changes, the script will also fail. There are always a lot of points where a script can fail.
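The sort/cut stage can be exercised on the two sample records above, deliberately given out of date order:

```shell
# Sort by year (numeric), month (month names), day, hour, minute; keep the filename
latest=$(printf '%s\n' \
  '2009 Oct 30 10 49 pidgin-devel-2.6.3-2.el5.i386.rpm' \
  '2009 Oct 27 01 14 pidgin-devel-2.6.2-2.el5.i386.rpm' |
  sort -k1n -k2M -k3n -k4n -k5n | cut -d ' ' -f 6 | tail -1)
echo "$latest"
```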
Using wget and gawk:
#!/bin/bash
pkg="kernel-headers"
wget -O- -q http://mirror.centos.org/centos/5/updates/x86_64/RPMS | awk -vpkg="$pkg" 'BEGIN{
  RS="\n"; FS="</a>"
  # map month names to two-digit numbers
  z=split("Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec",D,"|")
  for(i=1;i<=z;i++){
    date[D[i]]=sprintf("%02d",i)
  }
  temp=0
}
$1~pkg{
  p=$1
  t=$2
  gsub(/.*href=\042/,"",p)      # strip everything up to the opening quote of the href
  gsub(/\042>.*/,"",p)          # strip the closing quote and the rest: p is now the filename
  m=split(t,timestamp," ")      # timestamp[1] is the date, timestamp[2] the time
  n=split(timestamp[1],d,"-")   # d[1]=day, d[2]=month name, d[3]=year
  q=split(timestamp[2],hm,":")  # hm[1]=hour, hm[2]=minute
  datetime=d[3]date[d[2]]d[1]hm[1]hm[2]   # YYYYMMDDHHMM
  if ( datetime >= temp ){      # keep the most recent entry seen so far
    temp=datetime
    filepkg = p
  }
}
END{
  print "Latest package: "filepkg", date: ",temp
}'
an example run of the above:
linux$ ./findlatest.sh
Latest package: kernel-headers-2.6.18-164.6.1.el5.x86_64.rpm, date: 200911041457
Try this (which requires lynx):
lynx -dump -listonly -nonumbers http://mirror.centos.org/centos/5/updates/x86_64/RPMS/ |
grep -E '^.*xen-libs.*i386.rpm$' |
sort --version-sort |
tail -n 1
If your sort doesn't have --version-sort, you'll have to parse the version out of the filename or hope that a regular sort will do the right thing.
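The difference matters whenever a numeric component crosses a digit-count boundary; with GNU sort (the package names below are made up):

```shell
# Version sort treats 105 > 94; a plain lexical sort would put 105 first
newest=$(printf '%s\n' \
  'xen-libs-3.0.3-94.el5.i386.rpm' \
  'xen-libs-3.0.3-105.el5.i386.rpm' |
  sort --version-sort | tail -n 1)
echo "$newest"
```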
You may be able to do something similar with wget or curl, or even a pure Bash script using redirections with /dev/tcp/HOST/PORT. The problem with these is that you would then have to parse the HTML yourself.