I would like to implement version checking in a bash script: it should fetch a file from my website, and if the script's version does not match the latest version, open a web page. I heard I should use cURL, but I didn't find any tutorial covering my case.
I'm just gonna write what I would do:
#configuration
VERSION=1.0.0
URL='http://example.com/version'
DL_URL='http://example.com/download'
#later on whenever you want
# Assume the url only displays the current version number
CHECK_VERSION=$(wget -O- "$URL")
# Removing the periods turns the version strings into plain numbers
# that can be compared (this assumes each component is a single digit)
CURRENT_NUMBER=$(echo "$VERSION" | tr -d '.')
NEW_NUMBER=$(echo "$CHECK_VERSION" | tr -d '.')
test "$CURRENT_NUMBER" -gt "$NEW_NUMBER" || x-www-browser "$DL_URL"
Related
I have written a shell script to download and install the Tomcat server (v8.5.31): wget http://www.us.apache.org/dist/tomcat/tomcat-8/v8.5.31/bin/apache-tomcat-8.5.31.tar.gz It was working fine, but as soon as the version changed to 9.0.10, it started failing with 404 Not Found.
So what should I do to always get the latest version?
TL;DR
TOMCAT_VER=`curl --silent http://mirror.vorboss.net/apache/tomcat/tomcat-8/ | grep v8 | awk '{split($5,c,">v") ; split(c[2],d,"/") ; print d[1]}'`
wget -N http://mirror.vorboss.net/apache/tomcat/tomcat-8/v${TOMCAT_VER}/bin/apache-tomcat-${TOMCAT_VER}.tar.gz
I encountered the same challenge.
However, for my solution I require the latest 8.5.x Tomcat version, which keeps changing.
Since the URL to download Tomcat remains the same, with only the version changing, I found the following solution works for me:
TOMCAT_VER=`curl --silent http://mirror.vorboss.net/apache/tomcat/tomcat-8/ | grep v8 | awk '{split($5,c,">v") ; split(c[2],d,"/") ; print d[1]}'`
echo Tomcat version: $TOMCAT_VER
Tomcat version: 8.5.40
grep v8 - returns the line with the desired version:
<img src="/icons/folder.gif" alt="[DIR]"> v8.5.40/ 2019-04-12 13:16 -
awk '{split($5,c,">v") ; split(c[2],d,"/") ; print d[1]}' - Extracts the version we want:
8.5.40
I then proceed to download Tomcat using the extracted version:
wget -N http://mirror.vorboss.net/apache/tomcat/tomcat-8/v${TOMCAT_VER}/bin/apache-tomcat-${TOMCAT_VER}.tar.gz
This is the complete curl response from which the version is extracted with grep and awk:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>Index of /apache/tomcat/tomcat-8</title>
</head>
<body>
<h1>Index of /apache/tomcat/tomcat-8</h1>
<pre><img src="/icons/blank.gif" alt="Icon "> Name Last modified Size Description<hr><img src="/icons/back.gif" alt="[PARENTDIR]"> Parent Directory -
<img src="/icons/folder.gif" alt="[DIR]"> v8.5.40/ 2019-04-12 13:16 -
<hr></pre>
<address>Apache/2.4.25 (Debian) Server at mirror.vorboss.net Port 80</address>
</body></html>
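Note that the awk depends on the version sitting in the fifth whitespace-separated field of that listing, which can shift if the mirror's page layout changes. A sketch of a less position-dependent variant, assuming the index page keeps its v8.x.y/ directory links:
TOMCAT_VER=$(curl --silent http://mirror.vorboss.net/apache/tomcat/tomcat-8/ \
  | grep -o 'v8[0-9.]*' | sort -V | tail -n 1 | tr -d 'v')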
I've found a way using the official github mirror.
Basically, one has to query the github api for all available tags.
Afterwards, for each tag, the date has to be determined.
Finally, the tag with the latest date is the latest tag!
Try this script - let's call it latest-tag. It depends on jq. It takes a short while to execute, but should print the URL of the tarball of the latest tag (currently: https://api.github.com/repos/apache/tomcat/tarball/TOMCAT_9_0_10).
#!/bin/bash
# Prints the url to the latest tag of given github repo
# $1: repo (e.g.: apache/tomcat )
# $2: optional github credentials. Credentials are needed if running into the api rate limit (e.g.: <user>|<user>:<authkey>)
repo=${1:?Missing parameter: repo (e.g.: apache/tomcat )}
[ -n "$2" ] && credentials="-u $2"
declare -a commits
declare -a tarball_urls
while IFS=, read -r commit_url tarball_url
do
date=$(curl $credentials --silent "$commit_url" | jq -r ".commit.author.date")
if [[ "$date" > ${latest_date:- } ]]
then
latest_date=$date
latest_tarball_url=$tarball_url
fi
done < <( curl $credentials --silent "https://api.github.com/repos/$repo/tags" | jq -r ".[] | [.commit.url, .tarball_url] | @csv" | tr -d \")
echo $latest_tarball_url
Usage:
./latest-tag apache/tomcat
You might get hindered by the rate limit of the github api.
Therefore, you might want to supply github credentials to the script:
./latest-tag apache/tomcat <username>
This will ask you for your github password. In order to run it non-interactively, you can supply the script with a personal github api token:
./latest-tag apache/tomcat <username>:<api token>
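If hitting the commit endpoint once per tag is too slow, a rougher sketch is to sort the tag names themselves - an assumption that only holds when the names sort meaningfully, which Tomcat's TOMCAT_X_Y_Z tags mostly do:
# take the highest tag name by version sort (GNU sort -V)
curl --silent "https://api.github.com/repos/apache/tomcat/tags?per_page=100" \
  | jq -r '.[].name' | sort -V | tail -n 1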
Disclaimer - this solution uses screen scraping
Find and download latest version of Apache Tomcat 9 for Linux or Windows-x64.
Uses Python 3.7.3
import os
import urllib.request

# Pick exactly one of the next two lines and comment out the other:
# url_ends_with = ".tar.gz\""           # use this line for non-Windows
url_ends_with = "windows-x64.zip\""     # use this line for Windows-x64
url_starts_with = "\"http"
dir_to_contain_download = "tmp/"
tomcat_apache_org_frontpage_html = "tomcat.apache.org.frontpage.html"
download_page = "https://tomcat.apache.org/download-90.cgi"
try:
    os.makedirs(dir_to_contain_download, exist_ok=True)
    urllib.request.urlretrieve(download_page, dir_to_contain_download + tomcat_apache_org_frontpage_html)
    fp = open(dir_to_contain_download + tomcat_apache_org_frontpage_html)
    for line in fp:
        if url_ends_with in line and url_starts_with in line:
            # slice the URL out from between the surrounding quotes
            tomcat_url_index = line.find(url_ends_with)
            tomcat_url = line[line.find(url_starts_with) + 1 : tomcat_url_index + len(url_ends_with) - 1]
            print("Downloading: " + tomcat_url)
            print("To file: " + dir_to_contain_download + tomcat_url[tomcat_url.rfind("/") + 1:])
            zipfile = urllib.request.urlretrieve(tomcat_url, dir_to_contain_download + tomcat_url[tomcat_url.rfind("/") + 1:])
            break
finally:
    fp.close()
    os.remove(dir_to_contain_download + tomcat_apache_org_frontpage_html)
As I don't have enough reputation to reply to Jonathan or edit his post, here is my solution (tested with versions 8-10):
#!/bin/bash
wantedVer=9
TOMCAT_VER=`curl --silent http://mirror.vorboss.net/apache/tomcat/tomcat-${wantedVer}/|grep -oP "(?<=\"v)${wantedVer}(?:\.\d+){2}\b"|sort -V|tail -n 1`
wget -N https://mirror.vorboss.net/apache/tomcat/tomcat-${TOMCAT_VER%.*.*}/v${TOMCAT_VER}/bin/apache-tomcat-${TOMCAT_VER}.tar.gz
# Apache download link: wget -N https://dlcdn.apache.org/tomcat/tomcat-${TOMCAT_VER%.*.*}/v${TOMCAT_VER}/bin/apache-tomcat-${TOMCAT_VER}.tar.gz
I had trouble with Jonathan's code, because several versions were downloadable at the same time, which broke the composed download link. In this solution, only the newest one is considered.
Altering wantedVer at the top of the script is enough to switch between major versions.
Code Explained:
curl grabs the Apache directory listing for the wanted Tomcat version.
grep then extracts all the version numbers with a positive lookbehind, using the Perl regex flag (-P) and keeping only the matching part (-o). The result is one line per version number, in no particular order.
These lines are sorted by version (sort -V), and only the last line (tail -n 1), which holds the greatest version, is assigned to the variable TOMCAT_VER.
Finally, the download link is composed from the gathered version information and fetched via wget, but only if it is newer than the file already present (-N).
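The only non-obvious part of the download line is the parameter expansion that recovers the major version from TOMCAT_VER; a quick check with a hypothetical value:
TOMCAT_VER=9.0.65           # hypothetical version number
echo "${TOMCAT_VER%.*.*}"   # strips the shortest '.*.*' suffix, printing: 9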
I wrote this code:
TOMCAT_URL=$(curl -sS https://tomcat.apache.org/download-90.cgi | grep \
'>tar.gz</a>' | head -1 | grep -E -o 'https://[a-z0-9:./-]+.tar.gz')
TOMCAT_NAME=$(echo $TOMCAT_URL | grep -E -o 'apache-tomcat-[0-9.]+[0-9]')
It's not the most efficient way possible, but it's very easy to understand how it works, and it does work. Update the download-90.cgi part of the link if you want another major version.
Then you can do:
curl -sS $TOMCAT_URL | tar xfz -
ln -s $TOMCAT_NAME apache-tomcat
and you will have the current version of Tomcat at apache-tomcat. When a new version comes out you can use this to do an easy update while keeping the old version there.
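One small catch when the next version arrives: ln -s refuses to overwrite the existing apache-tomcat link, so re-pointing it takes the -sfn flags (force, and don't follow the old link into the directory):
ln -sfn "$TOMCAT_NAME" apache-tomcat   # re-point the link at the new version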
I have created a small program consisting of a couple of shell scripts that work together. It's almost finished, and everything seems to work fine except for one thing which I'm not really sure how to do, and which I need in order to finish this project. There seem to be many routes that can be taken, but I just can't get there.
I have some curl results with lots of unused data, including various links, and among all that data there is a bunch of similar links. I only need to get (into a variable) the link with the highest number (without the always-same text). The links are all similar and have this structure:
<a href="https://always/same/link/same-name_17.html">always same text</a>
<a href="https://always/same/link/same-name_18.html">always same text</a>
<a href="https://always/same/link/same-name_19.html">always same text</a>
I was thinking about something like:
content="$(curl -s "$url/$param")"
# pseudocode: fill linksArray with every link in $content whose
# anchor text is "always same text"
linksArray=( ... )
declare highestnumber
for file in "${linksArray[@]}"
do
    href=${file##*/}
    fullname=${href%.html}
    OIFS="$IFS"
    IFS='_'
    read -a nameparts <<< "${fullname}"
    IFS="$OIFS"
    if (( ${nameparts[1]} > ${highestnumber:-0} ))
    then
        highestnumber=${nameparts[1]}
    fi
done
echo "${nameparts[0]}_${highestnumber}.html"
result:
https://always/same/link/unique-name_19.html
This was just my guess; any working code that can be run from a bash script is okay.
Thanks!
Update
I found this nice program; it is easily installed by:
# 64bit version
wget -O xidel/xidel_0.9-1_amd64.deb https://sourceforge.net/projects/videlibri/files/Xidel/Xidel%200.9/xidel_0.9-1_amd64.deb/download
apt-get -y install libopenssl
apt-get -y install libssl-dev
apt-get -y install libcrypto++9
dpkg -i xidel/xidel_0.9-1_amd64.deb
It looks awesome, but I'm not really sure how to tweak it to my needs.
Based on that link and the answer below, I guess a possible solution would be:
use xidel, or use sed -n 's/.*href="\([^"]*\)".*/\1/p' file as suggested in that link, but tweaked to keep the link together with its html tags, like:
<a href="https://always/same/link/same-name_17.html">always same text</a>
then filter out everything that doesn't end with ">always same text</a>",
and then use the grep/sort approach mentioned below.
Continuing from the comment, you can use grep, sort and tail to isolate the highest number in your list of similar links without too much trouble. For example, if your list of links is as you have described (I've saved them in a file dat/links.txt for the purpose of the example), you can easily isolate the highest number in a variable:
Example List
$ cat dat/links.txt
<a href="https://always/same/link/same-name_17.html">always same text</a>
<a href="https://always/same/link/same-name_18.html">always same text</a>
<a href="https://always/same/link/same-name_19.html">always same text</a>
Parsing the Highest Numbered Link
$ myvar=$(grep -o 'https:.*[.]html' dat/links.txt | sort | tail -n1); \
echo "myvar : '$myvar'"
myvar : 'https://always/same/link/same-name_19.html'
(note: the command above is one command split over two lines with the line-continuation '\')
Applying Directly to Results of curl
Whether your list is in a file, or returned by curl -s, you can apply the same approach to isolate the highest number link in the returned list. You can use process substitution with the curl command alone, or you can pipe the results to grep. E.g. as noted in my original comment,
$ myvar=$(grep -o 'https:.*[.]html' < <(curl -s "$url/$param") | sort | tail -n1); \
echo "myvar : '$myvar'"
or pipe the result of curl to grep,
$ myvar=$(curl -s "$url/$param" | grep -o 'https:.*[.]html' | sort | tail -n1); \
echo "myvar : '$myvar'"
(same line continuation note.)
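One caveat: plain sort compares lexicographically, which only works here because the numeric suffixes are all the same width (_17 through _19). If the number can gain a digit (_9 vs _10), GNU sort's version sort is the safer drop-in:
myvar=$(curl -s "$url/$param" | grep -o 'https:.*[.]html' | sort -V | tail -n1)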
Why not use Xidel with XQuery to sort the links and return the last one?
xidel -q links.txt --xquery '(for $i in //@href order by $i return $i)[last()]' --input-format xml
The input-format parameter makes sure you don't need any html tags at the start and end of your txt file.
If I'm not mistaken, in the latest Xidel the -q (quiet) param is replaced by -s (silent).
I want to extract just the first filename from a remote zip archive without downloading the entire zip. In particular, I'm trying to get the build number of dartium (link to zip file). Since the file is quite large, I don't want to download the entire thing.
If I download the entire thing, unzip -l reports the first file as being: 0 2013-04-07 12:18 dartium-lucid64-inc-21033.0/. I want to get just this filename so I can parse out the 21033 portion as the build number.
I was doing this (total hack):
_url="https://storage.googleapis.com/dartium-archive/continuous/dartium-lucid64.zip"
curl -s $_url | head -c 256 | sed -n "s:.*dartium-lucid64-inc-\([0-9]\+\).*:\1:p"
It was working when I had my shell in ASCII mode, but I recently switched it to UTF-8 and it seems sed is now honoring that, which breaks my script.
I thought about hacking it by doing:
export LANG=
curl -s ...
But that seemed like an even bigger hack.
Is there a better way?
First, you can request a byte range using curl.
Next, use strings to extract all printable strings from the binary stream.
Add "q" after "p" in the sed expression to quit after the first match.
curl -s $_url -r0-256 | strings | sed -n "s:.*dartium-lucid64-inc-\([0-9]\+\).*:\1:p;q"
Or this:
curl -s $_url -r0-256 | strings | sed -n "/dartium-lucid64/{s:.*-\([^-]\+\)\/.*:\1:p;q}"
It should be a bit faster and more reliable. It also extracts the full version, including the subversion (if you need it).
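The reason a small range is enough: in the zip format the first local file header sits at byte 0 and stores the entry's filename, uncompressed, starting at offset 30, so the opening few hundred bytes always contain the first entry's name. You can eyeball what the range actually yields with:
# dump the printable strings found in the first 257 bytes of the archive
curl -s "$_url" -r0-256 | strings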
I know it's possible to open links in an html page (let's say, if you're using Firefox) with TextMate if the link has this format:
<a href="txmt://open/?url=file:///path/to/file&line=100">View</a>
But is it possible to do a similar thing with Vim? Perhaps like so:
<a href="vim://open/?url=file:///path/to/file&line=100">View</a>
Ideally this would use an existing Vim session.
Cheers,
Bernie
Found a way to do it:
Add a Protocol handler to Firefox
Open firefox and navigate to about:config
Add the following keys
network.protocol-handler.warn-external.txmt boolean false
network.protocol-handler.external.txmt boolean true
# this key is the path to the script we're about to create
network.protocol-handler.app.txmt string ~/protocol_handler/prot.sh
# I ended up needing this one as well on another machine, (no idea why)
network.protocol-handler.expose.txmt boolean false
Create the script ~/protocol_handler/prot.sh
Copy and paste the following into the file:
#! /usr/bin/env ruby
file_result = ARGV[0].scan(/file\:\/\/((\w|\/|\.)*).*/)
file_path = file_result[0][0]
line_result = ARGV[0].scan(/&line=(\d*)/)
if !line_result.empty?
line = line_result[0][0]
system "gvim --remote-silent +#{line} #{file_path}"
else
system "gvim --remote-silent #{file_path}"
end
Save the file.
Change the file mode to be executable:
$ chmod +x ~/protocol_handler/prot.sh
I'm not sure if you have to restart Firefox or not.
If you actually want to use the "vim://" protocol, just change the ending of the network keys from txmt to vim. Since several Rails plugins out there (rails-footer, namely) already use txmt, I just used that to avoid recoding.
Have fun!
Berns
http://www.mozilla.org/projects/netlib/new-handler.html
To get txmt:// links working with gedit, I had to use a bash script based on @Rystraum's related answer instead of the Ruby script. Here's ~/bin/txmt_proto.bash:
#!/bin/bash
FILE=$1
# strip everything before file://, drop the 'file://' prefix, and decode %2F to /
FILE=$(echo $FILE | grep -o "file:/\/.\+" | cut -c 8- | sed -e 's/%2F/\//g')
# pull the line number out of the &line= parameter
LINE=$(echo $FILE | grep -o "\&line=[0-9]\+")
LINE=$(echo $LINE | grep -o "[0-9]\+")
# cut the query string off, leaving just the file path
FILE=$(echo $FILE | grep -o "\(.\+\)\&")
FILE=$(echo $FILE | cut -d'&' -f1)
gedit +$LINE $FILE
and change the Firefox config network.protocol-handler.app.txmt to point at the script:
network.protocol-handler.app.txmt string ~/bin/txmt_proto.bash
I am trying to get UPDATE queries from a list of files using this script. I need to match lines containing "Update" alone, and not "Updated" or "UpdateSQL". As we know, all update queries contain SET, so I am using that as well. But I need to remove cases like Updated and UpdateSQL. Can anyone help?
nawk -v file="$TEST" 'BEGIN{RS=";"}
/[Uu][Pp][Dd][Aa][Tt][Ee] .*[sS][eE][tT]/{ gsub(/.*UPDATE/,"UPDATE");gsub(/.*Update/,"Update");gsub(/.*update/,"update");gsub(/\n+/,"");print file,"#",$0;}
' "$TEST" >> $OUT
This seems more readable to me without all the [Uu] and it doesn't require grep:
{ line=tolower($0); if (line ~ /update .*set/ && line !~ /updated|updatesql/) { gsub ...
You can try using grep first (and I assume you are on Solaris, given nawk):
grep -i "update.*set" "$TEST" | egrep -vi "updatesql|updated" | nawk .....