Writing a comparison bash script to verify a sha256sum against released code - bash

I'm trying to write a script that takes two arguments ($1 and $2), one for the $hash and one for the $file_name.
I am trying to use jq to parse the data required to download the release, then compare and report PASS or FAIL.
I seem to be stuck trying to think this out.
Here is my code:
#!/usr/bin/env sh
#
# Sifchain shasum check (revised).
#
# $1
hash_url=$( curl -R -s https://api.github.com/repos/Sifchain/sifnode/releases | jq '.[] | select(.name=="v0.10.0-rc.4")' | jq '.assets[]' | jq 'select(.name=="sifnoded-v0.10.0-rc.4-linux-amd64.zip.sha256")' | jq '.browser_download_url' | xargs $1 $2 )
echo $hash_url
# $2
hash=$( curl -s -L $hash_url | jq'.$2')
file_name=$(curl -R -s https://api.github.com/repos/Sifchain/sifnode/releases | jq '.[] | .name')
#
#
echo $hash | sha256sum
echo $file_name | sha256sum #null why?
echo "\n"
## version of the release $1, and the hash $2
## sha256 <expected_sha_256_sum> <name_of_the_file>
sha256() {
    if echo "$1 $2" #| sha256sum -c --quiet
    then
        echo pass $1 $2
        exit 0
    else
        echo FAIL $1 $2
        exit 1
    fi
}
# Invoke sha256
sha256 $hash_url $file_name
Ideally this should work for any comparison of a hash with the correct file, pulling the two parameters when the bash script is invoked.

I can suggest the following corrections/modifications:
#!/bin/bash
#sha file
SHA_URL=$(curl -R -s https://api.github.com/repos/Sifchain/sifnode/releases | \
jq --arg VERSION v0.10.0-rc.4 -r \
'.[] | select(.name==$VERSION) | .assets[] | select(.name |test("\\.sha256$")) | .browser_download_url')
SHA_VALUE=$(curl -s -L "$SHA_URL")
FILENAME=$(curl -R -s https://api.github.com/repos/Sifchain/sifnode/releases | \
jq --arg VERSION v0.10.0-rc.4 -r \
'.[] | select(.name==$VERSION) | .assets[] | select(.content_type =="application/zip") | .name')
# added just for testing; I'm assuming you already have the file locally
FILEURL=$(curl -R -s https://api.github.com/repos/Sifchain/sifnode/releases | \
jq --arg VERSION v0.10.0-rc.4 -r \
'.[] | select(.name==$VERSION) | .assets[] | select(.content_type =="application/zip") | .browser_download_url')
wget --quiet "$FILEURL" -O "$FILENAME"
echo "$SHA_VALUE" "$FILENAME" | sha256sum -c --quiet >/dev/null 2>&1
RESULT=$?
if [ $RESULT -eq 0 ]; then
    echo -n "PASS "
else
    echo -n "FAIL "
fi
echo "$SHA_VALUE" "$FILENAME"
exit $RESULT
Notes:
jq
--arg VERSION v0.10.0-rc.4 creates a "variable" that can be used inside the jq script
-r - raw output, strings are not quoted
test("\\.sha256$") - regular expression, used to match a generic .sha256 asset so you don't have to hardcode the full name
select(.content_type =="application/zip") - I'm assuming that's the file you are searching for
wget is used just for demo purposes, to download the file; I'm assuming you already have the file on your machine
sha256sum -c --quiet >/dev/null 2>&1 - redirecting to /dev/null is necessary because on error sha256sum is not quiet
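To get what the question was ultimately after, passing the parameters at invocation, the hardcoded release name can become a positional argument. A minimal sketch, assuming $1 is the release name and that the .sha256 asset contains only the bare hash (the script name check.sh is hypothetical):
#!/bin/bash
# Usage: ./check.sh v0.10.0-rc.4
VERSION="${1:?usage: $0 <release-version>}"
API=https://api.github.com/repos/Sifchain/sifnode/releases
# URL of the .sha256 asset for this release
SHA_URL=$(curl -s "$API" | jq --arg V "$VERSION" -r \
    '.[] | select(.name==$V) | .assets[] | select(.name | test("\\.sha256$")) | .browser_download_url')
# Name of the zip asset the hash belongs to
FILENAME=$(curl -s "$API" | jq --arg V "$VERSION" -r \
    '.[] | select(.name==$V) | .assets[] | select(.content_type=="application/zip") | .name')
SHA_VALUE=$(curl -s -L "$SHA_URL")
# Assumes $FILENAME has already been downloaded into the current directory
if echo "$SHA_VALUE $FILENAME" | sha256sum -c --quiet >/dev/null 2>&1; then
    echo "PASS $FILENAME"
else
    echo "FAIL $FILENAME"
    exit 1
fi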

Get first match from a CURL grep call

Objective:
I'm trying to write a script that will fetch two URLs from a GitHub release page and do something different with each one.
So far:
Here's what I've got so far.
$ curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \"
This will return the following:
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/gateway-8c29257704ddb021344bdaaa790909a0eacf3293bab94e02859828a6fd9b900a.tar.gz"
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/node_modules-921bd0d58022aac43f442647324b8b58ec5fdb4df57a760e1fc81a71627f526e.tar.gz"
I want to be able to create some directories, pull in the first one, navigate in the directories from the newly pulled zip after extracting it, and then pull in the second.
Fetching the first line is easy by piping the output to head -n1, but solving your problem needs more than just the first URL of the cURL output. Give this a try:
#!/bin/bash
# fetch your URLs
answer=`curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \"`
# get URLs and file names
first_file=`echo "$answer" | grep -Eo '.+?\.tar\.gz' | head -n1 | tr -d " "`
second_file=`echo "$answer" | grep -Eo '.+?\.tar\.gz' | head -n2 | tail -1 | tr -d " "`
first_file_name=`echo "$answer" | grep -Eo '[^/]+?\.tar\.gz' | head -n1 `
second_file_name=`echo "$answer" | grep -Eo '[^/]+?\.tar\.gz' | head -n2 | tail -1`
#echo $first_file
#echo $first_file_name
#echo $second_file_name
#echo $second_file
# download first file
wget "$first_file"
# extracting first one that must be in the current directory.
# else, change the directory first and put the path before $first_file!
tar -xzf "$first_file_name"
# do your stuff with the second file
You can simply pipe the URLs to xargs curl:
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
grep "browser_download_url.*tar.gz" |
cut -d : -f 2,3 | tr -d \" |
xargs curl -O
Or if you want to do some more manipulation on each URL, perhaps loop over the results:
curl ... | grep ... | cut ... | tr ... |
while IFS= read -r url; do
curl -O "$url"
: maybe do things with "$url" here
done
The latter could easily be extended to something like
... | while IFS= read -r url; do
d=${url##*/}
mkdir -p "$d"
( cd "$d"
curl -O "$url"
tar zxf *.tar.gz
# end of subshell means effects of "cd" end
)
done
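One tweak you may want (my addition, not from the original answer): d=${url##*/} keeps the full tarball name, .tar.gz included, as the directory name. Trimming the suffix gives a cleaner layout:
... | while IFS= read -r url; do
    f=${url##*/}      # file name: strip everything up to the last /
    d=${f%.tar.gz}    # directory name: strip the .tar.gz suffix
    mkdir -p "$d"
    ( cd "$d" && curl -sO "$url" && tar zxf "$f" )
done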

Echo the command result in a file.txt

I have a script such as:
cat list_id.txt | while read line; do for ACC in $line;
do
echo -n "$ACC\t"
curl -s "link=fasta&retmode=xml" |\
grep TSeq_taxid |\
cut -d '>' -f 2 |\
cut -d '<' -f 1 |\
tr -d "\n"
echo
sleep 0.25
done
done
This script lets me take a list of IDs in list_id.txt and fetch the corresponding names from the database at https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=nuccore&id=${ACC}&rettype=fasta&retmode=xml
So from this script I get something like
CAA42669\t9913
V00181\t7154
AH002406\t538120
What I would like is to print or echo this result directly into a file called new_ids.txt. I tried echo >> new_ids.txt, but the file is empty.
Thanks for your help.
A minimal refactoring of your script might look like
# Avoid useless use of cat
# Use read -r
# Don't use upper case for private variables
while read -r line; do
    for acc in $line; do
        echo -n "$acc\t"
        # No backslash necessary after | character
        curl -s "link=fasta&retmode=xml" |
        # Probably use a proper XML parser for this
        grep TSeq_taxid |
        cut -d '>' -f 2 |
        cut -d '<' -f 1 |
        tr -d "\n"
        echo
        sleep 0.25
    done
done <list_id.txt >new_ids.txt
This could probably still be simplified significantly, but without knowledge of what your input file looks like exactly, or what curl returns, this is somewhat speculative.
tr -s ' \t\n' '\n' <list_id.txt |
while read -r acc; do
    curl -s "link=fasta&retmode=xml" |
    awk -v acc="$acc" '/TSeq_taxid/ {
        split($0, a, /[<>]/); print acc "\t" a[3] }'
    sleep 0.25
done >new_ids.txt
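On the "proper XML parser" comment: here is a hedged sketch of the same extraction with xmllint (this assumes libxml2's xmllint is installed, uses the full efetch URL from the question, and takes TSeq_taxid to be a plain text element in the response):
taxid=$(curl -s "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=nuccore&id=${acc}&rettype=fasta&retmode=xml" |
    xmllint --xpath 'string(//TSeq_taxid)' -)   # string() yields the first match's text
printf '%s\t%s\n' "$acc" "$taxid"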

explain this shell script to me please

I am confused by this make.sh file. I read previous posts about the structure of shell scripts, but I could not figure this file out. What is the function of this file?
Can anyone explain it step by step?
#!/bin/sh
rm out/*
example_number=0
for name in `ls in`
do
out=`cat in/$name | grep ".o " | tr -s \ | cut -d\ -f2`
inp=`cat in/$name | grep ".i " | tr -s \ | cut -d\ -f2`
echo -n "${name} (i=${inp}, o=${out}) "
if [ $inp -le 12 ]
then
cat in/$name \
| sed '/.i/d' \
| sed '/.o/d' \
| sed '/.p/d' \
| sed '/.e/d' \
| sed 's/|/ /g' \
| tr -s \ \
| sed 's/^[ \t]*//;s/[ \t]*$//' \
> out/${name}.in
tst=`cat out/${name}.in | cut -d\ -f2 | grep - -c`
if [ $tst -ne 0 ]
then
echo "remove file"
rm out/${name}.in
else
echo processing...
./unix2dos.exe -q out/${name}.in
example_number=`expr $example_number + 1`
fi
else
echo " skip"
fi
done
for name in `grep 2 -l out/*`
do
echo Remove $name
rm $name
example_number=`expr $example_number - 1`
done
echo Number of examples is $example_number
# bad files
# apla ( 222? )
# tms
# mainpa...
It is not a Makefile but a shell script. You can see this from the file extension .sh and from the header
#!/bin/sh
which is the instruction (the "shebang") that tells the system to use /bin/sh to execute this file.
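To sketch the flow (my reading of the script, not part of the original answer): it empties out/, then for each file in in/ it reads the .i/.o header counts, skips any file with more than 12 inputs, strips the .i/.o/.p/.e header lines and the | characters into out/<name>.in, deletes any result whose second column contains a - (apparently don't-care values), converts the survivors with unix2dos, and finally removes any generated file that still contains a 2 before printing the remaining example count. The core field extraction, commented:
# Pull the count that follows ".o" in a header line such as ".o 4":
#   grep ".o "       keep the header line
#   tr -s ' '        squeeze runs of spaces into one
#   cut -d' ' -f2    take the second field, the count itself
out=`cat in/$name | grep ".o " | tr -s ' ' | cut -d' ' -f2`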

wget bash function without messy output

I am learning to customize wget in a bash function and am having trouble. I would like to display Downloading (file):% instead of the messy output of wget. The function below seems close, but I am having trouble calling it for my specific needs.
For example, my standard wget is:
cd 'C:\Users\cmccabe\Desktop\wget'
wget -O getCSV.txt http://xxx.xx.xxx.xxx/data/getCSV.csv
and that downloads the .csv as a .txt in the directory specified with all the messy wget output.
This function seems like it will do more or less what I need, but I cannot seem to get it to work correctly with my data. Below is what I have tried. Thank you :).
#!/bin/bash
download() {
local url=$1 wget -O getCSV.txt http://xxx.xx.xxx.xxx/data/getCSV.csv
local destin=$2 'C:\Users\cmccabe\Desktop\wget'
echo -n " "
if [ "$destin" ]; then
wget --progress=dot "$url" -O "$destin" 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
else
wget --progress=dot "$url" 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
fi
echo -ne "\b\b\b\b"
echo " DONE"
}
EDITED CODE
#!/bin/bash
download () {
url=http://xxx.xx.xxx.xxx/data/getCSV.csv
destin='C:\Users\cmccabe\Desktop\wget'
echo -n " "
if [ "$destin" ]; then
wget -O getCSV.txt --progress=dot "$url" -O "$destin" 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
else
wget -O getCSV.txt --progress=dot $url 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
fi
echo -ne "\b\b\b\b"
echo " DONE"
menu
}
menu() {
while true
do
printf "\n Welcome to NGS menu (v1), please make a selection from the MENU \n
==================================\n\n
\t 1 Patient QC\n
==================================\n\n"
printf "\t Your choice: "; read menu_choice
case "$menu_choice" in
1) patient ;;
*) printf "\n Invalid choice."; sleep 2 ;;
esac
done
}
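What stands out in the edited code (a suggestion, not from the thread): wget is given -O twice, so -O getCSV.txt is overridden by -O "$destin", and destin is a Windows directory path rather than an output file. A minimal sketch with a single, explicit output file (the URL and names are the question's placeholders):
download () {
    local url='http://xxx.xx.xxx.xxx/data/getCSV.csv'
    local destin='getCSV.txt'
    echo -n "Downloading ${destin}:     "
    wget --progress=dot "$url" -O "$destin" 2>&1 |
        grep --line-buffered "%" |
        sed -u -e 's/\.//g' |
        awk '{printf("\b\b\b\b%4s", $2)}'
    echo -e "\b\b\b\b DONE"
}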

OpenLDAP: get `directory` from cn=config

How can I get the directory for a specified DN with one ldapsearch request?
I mean: I have a few databases, and OpenLDAP is configured with cn=config. Each DN has its own ldif file, where its olcDbDirectory is specified.
Can I obtain the olcDbDirectory value for each DN?
For a backup script I need to set a variable which contains the directory, and this variable changes for every DN that is backed up/restored at that moment.
So - in bash I just found solution to create function like:
#!/bin/bash
getDir () {
file=`grep -R "$1" /etc/openldap/slapd.d/ | cut -d":" -f 1 | tail -n 1`
echo $file
dir=`cat $file | grep "olcDbDirectory" | awk '{print $2}'`
echo $dir
}
getDir testdb;
$ ./dn.sh
/etc/openldap/slapd.d/cn=config/olcDatabase={9}bdb.ldif
/var/lib/ldap/testdb
But this solution seems untidy... and I'd prefer to use something like:
getDir () {
    dir=`ldapsearch -x -D "cn=root,cn=config" "*somefilter*"`
}
Here it is:
$ ldapsearch -x -LLL -D 'cn=root,cn=config' -w PassWord -b 'cn=config' '(&(olcDbDirectory=*)(olcSuffix='testdb'))' olcDbDirectory | grep "olcDbDirectory" | cut -d":" -f 2
/var/lib/ldap/testdb
Or in a bash function:
#!/bin/bash
getDir () {
dirtodel=`ldapsearch -x -LLL -D 'cn=root,cn=config' -w PassWord -b 'cn=config' '(&(olcDbDirectory=*)(olcSuffix='${1}'))' olcDbDirectory | grep "olcDbDirectory" | cut -d":" -f 2`
echo $dirtodel
}
getDir 'dc=testdb'
Result:
$ ./dn.sh
/var/lib/ldap/testdb
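A small refinement (my suggestion, not from the thread): cut -d":" -f 2 leaves a leading space on the value; a sed substitution can strip the attribute name and the space in one step:
getDir () {
    ldapsearch -x -LLL -D 'cn=root,cn=config' -w PassWord -b 'cn=config' \
        '(&(olcDbDirectory=*)(olcSuffix='"${1}"'))' olcDbDirectory |
        sed -n 's/^olcDbDirectory: //p'
}
getDir 'dc=testdb'    # prints /var/lib/ldap/testdb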
