Get latest version of Maven with curl - bash

My task is to grab the latest version of Maven in a bash script, using curl against https://apache.osuosl.org/maven/maven-3/
Output should be: 3.8.5
curl -s "https://apache.osuosl.org/maven/maven-3/" | grep -o "[0-9]\.[0-9]\.[0-9]" | tail -1
works for me, thanks!
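A slightly more defensive variant (a sketch, assuming GNU grep and GNU sort for -E/-V): match multi-digit version components and sort by version instead of relying on the listing's order:
curl -s "https://apache.osuosl.org/maven/maven-3/" \
| grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
| sort -uV \
| tail -1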

One way is to:
Use grep to get only the lines containing the folder links
Use sed with a regex to extract just the version number
Use sort to sort the versions
Use tail -n1 to get the last line
curl -s https://apache.osuosl.org/maven/maven-3/ \
| grep folder \
| gsed -E 's/.*([[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+).*/\1/' \
| sort -V \
| tail -n1
(gsed is GNU sed; on Linux, plain sed -E works. The + quantifiers handle multi-digit components such as 3.10.0, and sort -V sorts by version rather than lexicographically.)
Output:
3.8.5

The URL you're mentioning is an HTML document and curl is not an HTML parser, so I'd look into an HTML parser like xidel if I were you:
$ xidel -s "https://apache.osuosl.org/maven/maven-3/" -e '//pre/a[last()]'
3.8.5/
$ xidel -s "https://apache.osuosl.org/maven/maven-3/" -e 'substring-before(//pre/a[last()],"/")'
3.8.5

I would suggest using the central repository instead of one of the Apache Maven mirror sites, because the information on Central is better suited to machine consumption. The file maven-metadata.xml contains the appropriate information.
There are two options: using the <version>..</version> tags, or the <latest>..</latest> tag.
MVNVERSION=$(curl -s https://repo1.maven.org/maven2/org/apache/maven/maven/maven-metadata.xml \
| grep "<version>.*</version>" \
| tr -s " " \
| cut -d ">" -f2 \
| cut -d "<" -f1 \
| sort -V -r \
| head -1)
The second option:
MVNVERSION=$(curl -s https://repo1.maven.org/maven2/org/apache/maven/maven/maven-metadata.xml \
| grep "<latest>.*</latest>" \
| cut -d ">" -f2 \
| cut -d "<" -f1)
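Since maven-metadata.xml is XML, an XML-aware tool is sturdier than grep/cut gymnastics. A minimal sketch, assuming xmllint (from libxml2) is installed:
MVNVERSION=$(curl -s https://repo1.maven.org/maven2/org/apache/maven/maven/maven-metadata.xml \
| xmllint --xpath 'string(//latest)' -)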

Related

Sending grep data iteratively using curl for output of tail -f

I am using grep to extract data from a log file that is being appended to with new rows, and I need to send the grepped data to a REST endpoint using curl. This is easy for a static file, but I cannot find a solution for a running log file. How can I realize this?
eg: tail -f | grep "<string>" > ~/<fileName>.log
The above puts the data in a file; I need to send it with a curl POST instead.
Maybe using a function like:
send_data() {
  curl -s -k -X POST --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    "http://${HOST}${PORT}/v1/notify" \
    -d "$1"
}
If tail -f | grep "<string>" > ~/<fileName>.log is working for you then you could do:
tail -f file | stdbuf -i0 -o0 -e0 grep "<string>" | xargs -n 1 -d $'\n' curl ...
or:
while IFS= read -r line; do
  curl ... "$line"
done < <(tail -f file | stdbuf -i0 -o0 -e0 grep "<string>")
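Putting the two together: a sketch that streams matching lines to the endpoint as they arrive. The log path, the ERROR pattern, the JSON payload shape, and the colon between HOST and PORT are assumptions:
#!/usr/bin/env bash
send_data() {
  # POST one line as JSON to the (assumed) notification endpoint
  curl -s -k -X POST --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    "http://${HOST}:${PORT}/v1/notify" \
    -d "$1"
}

# stdbuf keeps grep from buffering, so lines are sent as they appear;
# the JSON quoting is naive and breaks if $line contains " or \
while IFS= read -r line; do
  send_data "{\"message\": \"$line\"}"
done < <(tail -f /var/log/app.log | stdbuf -o0 grep "ERROR")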

Is it possible to comment inline in a multi-line newline escaped script?

When I have long multi-line piped commands in my scripts I would like to comment on what each line does, but I haven't found a way of doing so.
Given this snippet:
git branch -r --merged \
| grep " $remote" \
| egrep -v "HEAD ->" \
| util.esed -n 's/ \w*\/(.*)/\1/p' \
| egrep -v \
"$(skipped $skip | util.esed -e 's/,/|/g' -e 's/(\w+)/^\1$/g' )" \
| paste -s
Is it possible to insert comments in between the lines? It seems that using the backslash to escape the newline prevents me from adding comments at the end of the line, and I can't add the comment before the backslash, as that would hide the escaping.
Pseudo-code of what I would like the above script to look like
It seems I was unclear about what I wanted above, so to give a clue about what I am looking for, it should be something in the vein of this:
git branch -r --merged \ # list merged remote branches
| grep " $remote" \ # filter out the ones for $remote
| egrep -v "HEAD ->" \ # remove some garbage
#strip some whitespace:
| util.esed -n 's/ \w*\/(.*)/\1/p' \
# remove the skipped branches:
| egrep -v \
"$(skipped $skip | util.esed -e 's/,/|/g' -e 's/(\w+)/^\1$/g' )" \
| paste -s # something else
It doesn't have to be exactly like this (obviously, it's not valid syntax), but something similar. If it's not possible directly, due to syntactical restrictions, perhaps it's possible to write self-modifying code that will have comments that are removed before executing it?
You can try something like this; a comment is allowed after a trailing pipe because the shell keeps reading the next line for the rest of the command:
git branch --remote | # some comment
grep origin | # another comment
tr a-z A-Z
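Applying that pattern to part of the pipeline from the question (a sketch; util.esed is the asker's own helper, left as-is):
git branch -r --merged |             # list merged remote branches
  grep " $remote" |                  # keep only the ones for $remote
  egrep -v "HEAD ->" |               # remove some garbage
  util.esed -n 's/ \w*\/(.*)/\1/p' | # strip some whitespace
  paste -s                           # join into a single line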

Get golang version from compiled binary

Is there a way to get the golang version from pkg/ or from a compiled binary?
I want to automate removal of $GOPATH/pkg folder only when I change the golang version.
Never mind, found the answer:
[ "$(strings "$pkg_a_file" | grep 'go object' | head -n 1 | cut -f 5 -d ' ')" != "$(go version | cut -f 3 -d ' ')" ] && \
rm -rf "$GOPATH/pkg"
The strings $pkg_a_file | grep 'go object' | head -n 1 | cut -f 5 -d ' ' part will print something like go1.6.2.
pkg_a_file can be something like this:
PKG_OS_ARCH=$(go version | cut -d ' ' -f 4 | tr '/' '_')
pkg_a_file=$GOPATH/pkg/$PKG_OS_ARCH/gitlab.com/kokizzu/gokil/A.a
External reference: http://kokizzu.blogspot.co.id/2016/06/solution-for-golang-slow-compile.html
With Go 1.9.2, the grep pattern is:
strings $pkg_a_file | grep 'Go cmd/compile'
I found the following method to check the version of the Go toolchain used to compile the binary:
$ strings compiled_binary | grep '^go1'
go1.17.4
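On Go 1.13 and newer, the go tool can report this directly, with no strings/grep needed (assuming the binary was built with a modern toolchain):
$ go version ./compiled_binary
./compiled_binary: go1.17.4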

curl complex usage with pattern

I'm trying to download 2 files with curl based on a pattern, but it doesn't seem to work.
Files:
SystemOut_15.04.01_21.12.36.log
SystemOut_15.04.01_15.54.05.log
curl -f -k -u "login:password" https://myserver/cgi-bin/logviewer/index.cgi?getlogfile=SystemOut_15.04.01_21.12.36.log'&'server=qwerty123.com'&'numlines=100000000'&'appenv=MBL%20-%20PROD'&'directory=/app/WAS/was85/profiles/node/logs/mbl-server1
I know there is the -A option, but it doesn't work since my file name is inside the link.
How can I extract those 2 files using a pattern?
I did it myself: one curl gets the list of logs from the webpage, another downloads those files. The code looks like:
for file in $(curl -f -k -u "user:pwd" https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?listdirectory=/app/smx_client_mob/data/log'&'appenv=MBL%20-%20PROD'&'server=xshembl04pap.she.pwj.com \
| grep href \
| sed 's/.*href="//' \
| sed 's/".*//' \
| sed 's/javascript:getLog//g' \
| sed "s/['();]//g" \
| grep -i 'service' \
| grep '^[a-zA-Z].*'); do
  curl -o "$file" -f -k -u "user:pwd" https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?getlogfile="$file"'&'server=xshembl04pap.she.pwj.com'&'numlines=100000000'&'appenv=MBL%20-%20PROD'&'directory=/app/smx_client_mob/data/log
done
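If the file names are known up front, curl's own URL globbing can fetch both in one request; a sketch using the names and URL from the question, where #1 in -o expands to the matched alternative:
curl -f -k -u "login:password" -o "SystemOut_15.04.01_#1.log" \
"https://myserver/cgi-bin/logviewer/index.cgi?getlogfile=SystemOut_15.04.01_{21.12.36,15.54.05}.log&server=qwerty123.com&numlines=100000000&appenv=MBL%20-%20PROD&directory=/app/WAS/was85/profiles/node/logs/mbl-server1"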

Wget page title

Is it possible to Wget a page's title from the command line?
input:
$ wget http://bit.ly/rQyhG5
output:
If it’s broke, fix it right - Keeping it Real Estate. Home
This script would give you what you need:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
But there are lots of situations where it breaks, including if there is a <title>...</title> in the body of the page, or if the title is on more than one line.
This might be a little better:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -e 's!.*<head>\(.*\)</head>.*!\1!' \
| sed -e 's!.*<title>\(.*\)</title>.*!\1!'
but it does not fit your case as your page contains the following head opening:
<head profile="http://gmpg.org/xfn/11">
Again, this might be better:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -e 's!.*<head[^>]*>\(.*\)</head>.*!\1!' \
| sed -e 's!.*<title>\(.*\)</title>.*!\1!'
but there are still ways to break it, including when there is no head/title in the page.
Again, a better solution might be:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -n -e 's!.*<head[^>]*>\(.*\)</head>.*!\1!p' \
| sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
but I am sure we can find a way to break it. This is why a true XML parser is the right solution, but as your question is tagged shell, the above is the best I can come up with.
The paste and the two sed commands can be merged into a single sed, though it is less readable. However, this version has the advantage of working on multi-line titles:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;T;s!.*<title>\(.*\)</title>.*!\1!p}'
Update:
As explained in the comments, the last sed above uses the T command, which is a GNU extension. If you do not have a compatible version, you can use:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;tnext;b;:next;s!.*<title>\(.*\)</title>.*!\1!p}'
Update 2:
As the above still does not work on Mac, try:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;tnext};b;:next;s!.*<title>\(.*\)</title>.*!\1!p'
and/or
cat << EOF > script
H
\$x
\$s!.*<head[^>]*>\(.*\)</head>.*!\1!
\$tnext
b
:next
s!.*<title>\(.*\)</title>.*!\1!p
EOF
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -f script
(Note the \ before the $ to avoid variable expansion.)
It seems that the :next label does not like being prefixed by a $, which could be a problem in some sed versions.
The following will pull whatever lynx thinks the title of the page is, saving you from all of the regex nonsense. Assuming the page you are retrieving is standards compliant enough for lynx, this should not break.
lynx -dump example.com | sed '2q;d'
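An HTML-aware alternative, in the same spirit as the xidel answer to the Maven question above (assuming xidel is installed):
$ xidel -s "http://bit.ly/rQyhG5" -e '//title'
If it’s broke, fix it right - Keeping it Real Estate. Home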
