The index.html returned by the curl command looks like this:
<html>
<head><title>Index of myorg/release/builds/production/</title>
</head>
<body>
<h1>Index of myorg/release/builds/production/</h1>
<pre>Name Last modified Size</pre><hr/>
<pre>../
1.0.60/ 06-Jul-2022 07:47 -
1.0.63/ 06-Jul-2022 10:21 -
1.0.64/ 09-Jul-2022 18:08 -
1.0.65/ 09-Jul-2022 18:42 -
1.0.71/ 10-Jul-2022 10:23 -
1.0.73/ 14-Jul-2022 17:28 -
1.0.75/ 20-Jul-2022 07:25 -
{STOCKIO}/ 24-May-2022 11:09 -
dashboard-react-module-1.0.29.tar.gz 24-May-2022 07:27 87.74 MB
dashboard-react-module-1.0.29.tar.gz.md5 24-May-2022 07:27 32 bytes
dashboard-react-module-1.0.29.tar.gz.sha1 24-May-2022 07:27 40 bytes
dashboard-react-module-1.0.29.tar.gz.sha256 24-May-2022 07:27 64 bytes
dashboard-react-module.tar.gz 24-May-2022 07:27 87.74 MB
dashboard-react-module.tar.gz.md5 24-May-2022 07:27 32 bytes
dashboard-react-module.tar.gz.sha1 24-May-2022 07:27 40 bytes
</pre>
<hr/><address style="font-size:small;">Artifactory/6.23.41 Server .myorg.com Port 80</address></body></html>
I'm unable to construct the logic to find the largest version entry in this file (here it's 1.0.75).
I tried grepping only the numbers, like grep -E "[[:digit:]]\.[[:digit:]]\.[[:digit:]]{1,4}" index.html, but it prints the same output as above (whole lines rather than just the numbers).
My idea is to get all the numeric entries like 1.0.60, 1.0.63 ... into an array, cut off the last part of each number and compare those to get the largest one, but I'm unable to find the right grep command that gives only the numeric values.
Or is there a more efficient way to do it?
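A minimal sketch of that idea (my commands, not a definitive answer), assuming GNU grep for -oE and GNU sort for -V (version sort), and that the raw index.html contains anchors like <a href="1.0.60/"> as the answers below rely on:
# keep only hrefs that look like version directories, strip the markup, then version-sort and take the largest
grep -oE 'href="[0-9]+(\.[0-9]+)*/"' index.html | grep -oE '[0-9]+(\.[0-9]+)*' | sort -V | tail -1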
Using sed to filter the data, sort to order it (in case it is unsorted), and tail to show the last (largest) entry:
$ sed -En '/href/s~[^>]*>([0-9][^/]*).*~\1~p' input_file | sort -n | tail -1
1.0.75
Match lines containing the string href
Capture the match within parentheses and exclude everything else
Return the match with the backreference \1
Sort the piped output numerically
Print the last line (highest value)
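One caveat (my note): sort -n compares the lines as plain numbers and falls back to a text comparison for the rest, so a hypothetical 1.0.9 could end up after 1.0.75. If GNU sort is available, its version sort is more robust:
# same pipeline, but with version sort (-V) so multi-digit components order correctly
sed -En '/href/s~[^>]*>([0-9][^/]*).*~\1~p' input_file | sort -V | tail -1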
With the samples and attempts you have shown, please try the following GNU awk + sort + head solution.
awk 'match($0,/<a href="([0-9]+(\.[0-9]+)*)/,arr){print arr[1] | "sort -rV | head -1"}' Input_file
Explanation: The awk program parses Input_file with its match() function, using the regex <a href="([0-9]+(\.[0-9]+)*), whose capturing group keeps only the version number. GNU awk can store the captured groups in an array, so arr ends up containing just the version value. Each match is printed through a pipe to the shell command sort -rV (reverse version sort, i.e. descending order); once all values have been printed, the output goes to head -1, which prints only the first line, i.e. the highest version.
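Note (my addition): the three-argument match() is GNU-awk-specific. A rough equivalent using only RSTART/RLENGTH would be:
# capture the version by offsetting past the literal <a href=" (9 characters)
awk 'match($0, /<a href="[0-9]+(\.[0-9]+)*/) { print substr($0, RSTART+9, RLENGTH-9) }' Input_file | sort -rV | head -1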
No doubt there are a lot of ways to do this...
cat foo1.x | grep 'href="[0-9]' | sed -E 's/.*href=.1.0.([0-9]+).*/\1/' | sort -u -n | tail -1
The versions are already sorted in index.html.
Getting the last one:
awk -F'["/]' '/href="([0-9]+\.[0-9]+\.[0-9]+)\/"/{n=$2}END{print n}' index.html
1.0.75
If the versions are not sorted:
awk -F'["/]' '
/href="([0-9]+\.[0-9]+\.[0-9]+)\/"/ { a[NR]=$2 }
END{
asorti(a,b,"@val_num_desc");
print a[b[1]]
}
' index.html
1.0.75
Related
I was trying to solve one of my old assignments and I am literally stuck on this one. Can anyone help me?
There is a file called "datafile". This file has names of some friends and their
ages. But unfortunately, the names are not in the correct format. They should be
lastname, firstname
But, by mistake they are firstname,lastname
The task is to write a shell script called fix_datafile
to correct the problem and sort the names alphabetically. The corrected file
is called datafile.fix.
Please make sure the original structure of the file is kept untouched.
The following is the sample of datafile.fix file:
#personal information
#******** Name ********* ***** age *****
Alexanderovich,Franklin 47
Amber,Christine 54
Applesum,Franky 33
Attaboal,Arman 18
Balad,George 38
Balad,Sam 19
Balsamic,Shery 22
Bojack,Steven 33
Chantell,Alex 60
Doyle,Jefry 45
Farland,Pamela 40
Handerman,jimmy 23
Kashman,Jenifer 25
Kasting,Ellen 33
Lorux,Allen 29
Mathis,Johny 26
Maxter,Jefry 31
Newton,Gerisha 40
Osama,Franklin 33
Osana,Gabriel 61
Oxnard,George 20
Palomar,Frank 24
Plomer,Susan 29
Poolank,John 31
Rochester,Benjami 40
Stanock,Verona 38
Tenesik,Gabriel 29
Whelsh,Elsa 21
If you can use awk (I suppose you can), then here's a script which does what you need:
#!/bin/bash
RESULT_FILE_NAME="datafile.new"
head -4 datafile.fix > "$RESULT_FILE_NAME"
tail -n +5 datafile.fix | awk -F"[, ]" '{if(!$2){print ""}else{print($2","$1, $3)}}' >> "$RESULT_FILE_NAME"
Passing -F"[, ]" lets awk split columns on both , and space, and all that remains is to print the columns in the needed format. The downsides are that we need an if statement to preserve empty lines, and that the file header has to be treated separately.
Another option is using sed:
sed -E 's/([a-zA-Z]+),([a-zA-Z]+) ([0-9]+)/\2,\1 \3/g' datafile.fix > datafile.new
The downside is that it requires a regex that is not as obvious as the awk syntax.
awk -F[,\ ] '
!/^$/ && !/^#/ {
first=$1;
last=$2;
map[first][last]=$0
}
END {
PROCINFO["sorted_in"]="@ind_str_asc";
for (i in map) {
for (j in map[i])
{
print map[i][j]
}
}
}' namesfile > datafile.fix
One liner:
awk -F[,\ ] '!/^$/ && !/^#/ { first=$1;last=$2;map[first][last]=$0 } END { PROCINFO["sorted_in"]="@ind_str_asc";for (i in map) { for (j in map[i]) { print map[i][j] } } }' namesfile > datafile.fix
A solution completely in gawk.
Set the field separator to both , and space. Then ignore any lines that are empty or start with #. Set the first and last variables from the delimited fields, and create a two-dimensional array called map, indexed by first and last name, with the value equal to the line. At the end, set the array traversal order to string-ascending by index and loop through the array, printing the names in order as requested.
Completely in bash:
re="^[[:space:]]*([^#]([[:space:]]|[[:alpha:]])+),(([[:space:]]|[[:alpha:]])*[[:alpha:]]) *([[:digit:]]+)"
while read line
do
if [[ ${line} =~ $re ]]
then
echo ${BASH_REMATCH[3]},${BASH_REMATCH[1]} ${BASH_REMATCH[5]}
else
echo "${line}"
fi
done < names.txt
The core of this is to capture, using bash regex matching (the =~ operator of the [[ command), parenthesis groupings, and the BASH_REMATCH array, the name before the comma (([^#]([[:space:]]|[[:alpha:]])+)), the name after the comma ((([[:space:]]|[[:alpha:]])*[[:alpha:]])), and the age ( *([[:digit:]]+)). The first-name regex is written so as to exclude comments, and the last-name regex is written so as to handle multiple spaces before the age without including them in the name. Preconditions: commented lines, with or without leading spaces (^[[:space:]]*[^#]), and lines without a comma are passed through unchanged. Either first names or last names may contain internal spaces. Once the last name and first name are isolated, it is easy to print them in reverse order followed by the age (echo ${BASH_REMATCH[3]},${BASH_REMATCH[1]} ${BASH_REMATCH[5]}). Note that the letter/space groupings count as matches too, which is why we skip groups 2 and 4.
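For illustration (my example, not part of the original answer), this is how the capture groups land on one sample line:
re="^[[:space:]]*([^#]([[:space:]]|[[:alpha:]])+),(([[:space:]]|[[:alpha:]])*[[:alpha:]]) *([[:digit:]]+)"
line="Franklin,Alexanderovich 47"   # firstname,lastname age
[[ $line =~ $re ]] && echo "${BASH_REMATCH[3]},${BASH_REMATCH[1]} ${BASH_REMATCH[5]}"
# prints: Alexanderovich,Franklin 47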
I have tried using awk and sed. Try this and see if it works:
less datafile.fix | sed 's/ /,/g' | awk -F "," '{print $2,$1,$3}' | sed 's/ /,/' | sed 's/^,//' | sort -u > datafile_new.fix
I have a text file (bigfile.txt) with thousands of rows. I want to make a smaller text file with 1% of the rows, chosen randomly. I tried the following:
output=$(wc -l bigfile.txt)
ds1=$(0.01*output)
sort -r bigfile.txt|shuf|head -n ds1
It gives the following error:
head: invalid number of lines: ‘ds1’
I don't know what is wrong.
Even after you fix the other issues in your bash script, bash cannot do floating-point arithmetic. You need an external tool like awk, which I would use as:
randomCount=$(awk 'END{print int((NR==0)?0:(NR/100))}' bigfile.txt)
(( randomCount )) && sort -r file | shuf | head -n "$randomCount"
E.g. writing a file with 221 lines using the loop below and trying to get random lines from it:
tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
for i in {1..221}; do echo $i; done >> "$tmpfile"
randomCount=$(awk 'END{print int((NR==0)?0:(NR/100))}' "$tmpfile")
If I print the count, it returns the integer 2, and using that in the next command:
sort -r "$tmpfile" | shuf | head -n "$randomCount"
86
126
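As a side note (my suggestion, assuming GNU coreutils), shuf can draw the sample directly with -n, which makes the sort -r unnecessary:
# pick randomCount random lines straight from the file
shuf -n "$randomCount" bigfile.txt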
Roll a die (with rand()) for each line of the file and get a number between 0 and 1. Print the line if the die shows less than 0.01:
awk 'rand()<0.01' bigFile
Quick test - generate 100,000,000 lines and count how many get through:
seq 1 100000000 | awk 'rand()<0.01' | wc -l
999308
Pretty close to 1%.
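One caveat worth noting (my addition): awk's rand() typically returns the same sequence on every run unless it is seeded (GNU awk behaves this way), so the same lines would be selected each time. Seeding with srand() in a BEGIN block varies the sample between runs:
awk 'BEGIN{srand()} rand()<0.01' bigFile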
If you want the order random as well as the selection, you can pass this through shuf afterwards:
seq 1 100000000 | awk 'rand()<0.01' | shuf
On the subject of efficiency which came up in the comments, this solution takes 24s on my iMac with 100,000,000 lines:
time { seq 1 100000000 | awk 'rand()<0.01' > /dev/null; }
real 0m23.738s
user 0m31.787s
sys 0m0.490s
The only other solution that works here, heavily based on OP's original code, takes 13 minutes 19s.
I have a set of CSV files spanning over 70GB, with about 35GB of that being the field I'm interested in (around 100 bytes for this field in each row).
The data is highly duplicated (a sampling shows that the top 1000 values cover 50%+ of the rows) and I'm interested in getting the total unique count.
With a not-so-large data set I would do
cat my.csv | cut -f 5 | sort | uniq -c | sort --numeric
and it works fine.
However, the problem I have is that (to my understanding), because of the intermediate sort, this command needs to hold the whole set of data in RAM (and then on disk, because it does not fit in my 16GB of RAM) before streaming it to uniq -c.
I would like to know if there's a command or awk/python script to do the sort | uniq -c in one step, so that the RAM consumption is far lower?
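For reference, the most direct one-pass approach keeps a single hash entry per distinct value of the field, so memory grows with the number of distinct values rather than the number of rows. A minimal awk sketch (my sketch, assuming a comma-delimited fifth field as in the answers below):
# count occurrences per distinct field-5 value without sorting all the rows first
awk -F, '{ cnt[$5]++ } END { for (v in cnt) print cnt[v], v }' my.csv | sort -n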
You can try this:
perl -F, -MDigest::MD5=md5 -lanE 'say unless $seen{ md5($F[4]) }++' < file.csv >unique_field5.txt
It will hold in memory a 16-byte MD5 digest for every unique field 5 (i.e. $F[4]). Or you can use
cut -d, -f5 csv | perl -MDigest::MD5=md5 -lnE 'say unless $seen{md5($_)}++'
for the same result.
Of course, MD5 isn't cryptographically safe these days, but it will probably be good enough for this purpose. It is also possible to use SHA-1 or SHA-256; just use -MDigest::SHA=sha256. The SHA digests are longer, so they need more memory.
It is similar to the awk solution linked in the comments, with one difference: the hash key here is not the whole input line but just its 16-byte MD5 digest.
EDIT
Because I was wondering about the performance, I created this test case:
# this perl snippet creates 400,000,000 records,
# each 100 bytes plus an attached random number;
# the total size of the data is about 40GB.
# each invocation generates the same data (srand(1));
# because the random number is between 0 and 50_000_000,
# there are approx. 50 million distinct records.
gendata() {
perl -E '
BEGIN{ srand(1) }
say "x"x100, int(rand()*50_000_000) for 1..400_000_000
'
}
# the unique sorting - by digest
# also using the Devel::Size perl module to get the final size of the data held in memory
# using md5
domd5() {
perl -MDigest::MD5=md5 -MDevel::Size=total_size -lnE '
say unless $seen{md5($_)}++;
END {
warn"total: " . total_size(\%seen);
}'
}
#using sha256
dosha256() {
perl -MDigest::SHA=sha256 -MDevel::Size=total_size -lnE '
say unless $seen{sha256($_)}++;
END {
warn"total: " . total_size(\%seen);
}'
}
#MAIN
time gendata | domd5 | wc -l
time gendata | dosha256 | wc -l
results:
total: 5435239618 at -e line 4, <> line 400000000.
49983353
real 10m12,689s
user 12m43,714s
sys 0m29,069s
total: 6234973266 at -e line 4, <> line 400000000.
49983353
real 15m51,884s
user 18m23,900s
sys 0m29,485s
That is:
for the md5:
memory usage: 5,435,239,618 bytes (approx. 5.4 GB)
unique records: 49,983,353
time to run: 10 min
for the sha256:
memory usage: 6,234,973,266 bytes (approx. 6.2 GB)
unique records: 49,983,353
time to run: 16 min
In contrast, doing the plain-text unique search using the "usual" approach:
doplain() {
perl -MDevel::Size=total_size -lnE '
say unless $seen{$_}++;
END {
warn"total: " . total_size(\%seen);
}'
}
e.g. running:
time gendata | doplain | wc -l
result:
memory usage is much bigger: 10,022,600,682 bytes; my notebook with 16GB RAM starts swapping heavily (it has an SSD, so not a big deal, but still...)
unique records: 49,983,353
time to run: 8:30 min
Result?
just use the
cut -d, -f5 csv | perl -MDigest::MD5=md5 -lnE 'say unless $seen{md5($_)}++'
and you should get the unique lines fast enough.
You can try this:
split --filter='sort | uniq -c | sed "s/^\s*//" > $FILE' --line-bytes=15G -d "dataset" "dataset-"
At this point you should have around 5 files named dataset-<i>, each of which should be much smaller than 15G.
To merge the files, you can save the following bash script as merge.bash:
#! /bin/bash
#
read -r prev_line
prev_count=${prev_line%% *}
prev_line=${prev_line#* }
while read -r line; do
    count="${line%% *}"
    line="${line#* }" # This line does not handle blank lines correctly
    if [ "$line" != "$prev_line" ]; then
        echo "$prev_count $prev_line"
        prev_count=$count
        prev_line=$line
    else
        prev_count=$((prev_count + count))
    fi
done
echo "$prev_count $prev_line"
And run the command:
sort -m -k 2 dataset-* | bash merge.bash > final_dataset
Note: blank lines are not handled correctly; if that matters for your data, you can remove them from your dataset or correct merge.bash.
Is there any easy way to convert JCL SORT to Shell Script?
Here is the JCL SORT:
OPTION ZDPRINT
SORT FIELDS=(15,1,CH,A)
SUM FIELDS=(16,8,25,8,34,8,43,8,52,8,61,8),FORMAT=ZD
OUTREC BUILD=(14X,15,54,13X)
Only bytes 15 onward, for a length of 54, are relevant in the input data; that range holds the key and the source values for the summation. The other bytes of the input are not important.
Assuming the data is printable.
The data is sorted on the one-byte key, and the values of records with the same key are summed, separately, for each of the six numbers. A single record is written per key, with the summed values and with the other data (the single bytes in between and at the end) taken from the first record. The sort is "unstable" (meaning that the order of records presented to the summation is not reproducible from one execution to the next), so those byte values should theoretically be the same on all records, or be irrelevant.
The output, for each key, is a record containing 14 blanks (14X), then the 54 bytes starting at position 15 (the first of which is the one-byte key), followed by 13 blanks (13X). The numbers should be right-aligned and left-zero-filled [OP to confirm, and amend sample data and expected output].
Assuming the sums will only contain positive numbers and will not be signed, and that any number less than 999999990 will have leading zeros in the unused positions (the numbers are character data, right-aligned and left-zero-filled).
Assuming the one-byte key will only be alphabetic.
The data has already been converted to ASCII from EBCDIC.
Sample Input:
00000000000000A11111111A11111111A11111111A11111111A11111111A111111110000000000000
00000000000000B22222222A22222222A22222222A22222222A22222222A222222220000000000000
00000000000000C33333333A33333333A33333333A33333333A33333333A333333330000000000000
00000000000000A44444444B44444444B44444444B44444444B44444444B444444440000000000000
Expected Output:
A55555555A55555555A55555555A55555555A55555555A55555555
B22222222A22222222A22222222A22222222A22222222A22222222
C33333333A33333333A33333333A33333333A33333333A33333333
(14 preceding blanks and 13 trailing blanks)
Expected volume: tens of thousands of records.
I have figured out an answer:
awk -v FIELDWIDTHS="14 1 8 1 8 1 8 1 8 1 8 1 8 13" \
'{if(!($2 in a)) {a[$2]=$2; c[$2]=$4; e[$2]=$6; g[$2]=$8; i[$2]=$10; k[$2]=$12} \
b[$2]+=$3; d[$2]+=$5; f[$2]+=$7; h[$2]+=$9; j[$2]+=$11; l[$2]+=$13;} END \
{for(id in a) printf("%14s%s%s%s%s%s%s%s%s%s%s%s%s%13s\n","",a[id],b[id],c[id],d[id],e[id],f[id],g[id],h[id],i[id],j[id],k[id],l[id],"");}' input
Explanation:
1) Split the string
awk -v FIELDWIDTHS="14 1 8 1 8 1 8 1 8 1 8 1 8 13"
2) Let $2 be the key; $4, $6, $8, $10, $12 are only set the first time a key is seen
{if(!($2 in a)) {a[$2]=$2; c[$2]=$4; e[$2]=$6; g[$2]=$8; i[$2]=$10; k[$2]=$12}
3) The other fields are summed up
b[$2]+=$3; d[$2]+=$5; f[$2]+=$7; h[$2]+=$9; j[$2]+=$11; l[$2]+=$13;} END
4) Print one record for each key
{for(id in a) printf("%14s%s%s%s%s%s%s%s%s%s%s%s%s%13s\n","",a[id],b[id],c[id],d[id],e[id],f[id],g[id],h[id],i[id],j[id],k[id],l[id],"");}
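Note that FIELDWIDTHS is a GNU awk extension. For awks without it, here is a hedged, portable sketch of the same idea that carves the fixed columns out with substr() instead (assuming the layout above: the key at column 15, six 8-digit numbers starting at columns 16, 25, 34, 43, 52 and 61, and one-byte separators in between):
awk '{
    key = substr($0, 15, 1)
    for (i = 0; i < 6; i++) sum[key, i] += substr($0, 16 + i*9, 8)
    if (!(key in seen)) {                # keep the separator bytes from the first record
        seen[key] = 1
        for (i = 1; i < 6; i++) sep[key, i] = substr($0, 15 + i*9, 1)
    }
}
END {
    for (k in seen) {
        printf "%14s%s", "", k
        for (i = 0; i < 6; i++) printf "%s%08d", (i ? sep[k, i] : ""), sum[k, i]
        printf "%13s\n", ""
    }
}' input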
Okay, I have tried something.
1) Extract the duplicate keys from the file and store them in a duplicates file.
awk '{k=substr($0,1,15);a[k]++}END{for(i in a)if(a[i]>1)print i}' sample > duplicates
OR
awk '{k=substr($0,1,15);print k}' sample | sort | uniq -c | awk '$1>1{print $2}' > duplicates
2) For the duplicates, do the calculation and create newfile in the specified format.
while read line
do
grep ^$line sample | awk -F[A-Z] -v key=$line '{for(i=2;i<=7;i++)f[i]=f[i]+$i}END{printf("%14s"," ");for(i=2;i<=7;i++){printf("%s%.8s",substr(key,15,1),f[i]);if(i==7)printf("%13s\n"," ")}}' > newfile
done < duplicates
3) For the unique ones, format and append to newfile.
grep -v -f duplicates sample | sed 's/0/ /g' >> newfile ## goes wrong if a 0 appears inside the data rather than only at the start and end of a row
OR
grep -v -f duplicates sample | awk '{printf("%14s%s%13s\n"," ",substr($0,15,54)," ")}' >> newfile
If you have any doubts, let me know.
I would love some help with a Bash script loop that will show all the differences between two binary files, using just
cmp file1 file2
It only shows the first change. I would like to use cmp because it gives an offset and a line number for each change, but if you think there's a better command I'm open to it :) Thanks.
I think cmp -l file1 file2 might do what you want. From the manpage:
-l --verbose
Output byte numbers and values of all differing bytes.
The output is a table of the offset, the byte value in file1 and the value in file2 for all differing bytes. It looks like this:
4531 66 63
4532 63 65
4533 64 67
4580 72 40
4581 40 55
[...]
So the first difference is at (decimal) offset 4531, where file1's byte value is 66 (octal) and file2's is 63 (octal).
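If you prefer hexadecimal output, the octal values can be converted with a small filter; a sketch assuming GNU awk for strtonum() (the leading "0" makes it parse the fields as octal, and $1-1 turns the 1-based offset into a 0-based one):
cmp -l file1 file2 | gawk '{printf "0x%08X %02X %02X\n", $1-1, strtonum("0" $2), strtonum("0" $3)}'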
A method that also works for single-byte additions and deletions:
diff <(od -An -tx1 -w1 -v file1) \
<(od -An -tx1 -w1 -v file2)
Generate a test case with a single removal of byte 64:
for i in `seq 128`; do printf "%02x" "$i"; done | xxd -r -p > file1
for i in `seq 128`; do if [ "$i" -ne 64 ]; then printf "%02x" $i; fi; done | xxd -r -p > file2
Output:
64d63
< 40
If you also want to see the ASCII version of the character:
bdiff() (
f() (
od -An -tx1c -w1 -v "$1" | paste -d '' - -
)
diff <(f "$1") <(f "$2")
)
bdiff file1 file2
Output:
64d63
< 40 #
Tested on Ubuntu 16.04.
I prefer od over xxd because:
it is POSIX, while xxd is not (it comes with Vim)
it has -An to remove the address column without needing awk.
Command explanation:
-An removes the address column. This is important because otherwise all lines would differ after a byte addition or removal.
-w1 puts one byte per line, so that diff can consume it. It is crucial to have one byte per line, or else every line after a deletion would become out of phase and differ. Unfortunately, this is not POSIX, but present in GNU.
-tx1 is the representation you want, change to any possible value, as long as you keep 1 byte per line.
-v prevents the asterisk repetition abbreviation (*), which would interfere with the diff
paste -d '' - - joins every two lines. We need it because the hex and ASCII go into separate adjacent lines. Taken from: Concatenating every other line with the next
we use parentheses () to define bdiff instead of {} to limit the scope of the inner function f; see also: How to define a function inside another function in Bash?
See also:
https://superuser.com/questions/125376/how-do-i-compare-binary-files-in-linux
https://unix.stackexchange.com/questions/59849/diff-binary-files-of-different-sizes
The most efficient workaround I've found is to translate the binary files to some form of text using od.
Then any flavour of diff works fine.
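A minimal sketch of that workaround (file names are placeholders; adding od's -w1, as in the earlier answer, also keeps single-byte insertions from knocking the dumps out of sync):
# dump both files as plain hex text, then compare with whatever diff flavour you like
od -An -v -tx1 file1 > file1.hex
od -An -v -tx1 file2 > file2.hex
diff -u file1.hex file2.hex    # or vimdiff, sdiff, etc.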