I have a workable solution, but it is not presentable/clean enough for public usage.
The file "version.txt" exists both remotely and locally. The difference is the version number:
Remote:
17 March 2022
FVWM myExtensions ver. 3.1.4
Local:
15 March 2022
FVWM myExtensions ver. 3.1.1
In my "poor" solution I manually changed the lines into one line for awk to find the last column and sed removing the dots between the number. Both results are made as variables.
awk '{print $NF}' download/version.txt > tmpGit.txt
VARgit=`sed 's|[.]||g' tmpGit.txt`
awk '{print $NF}' ~/.fvwm/version.txt > tmpLocal.txt
VARlocal=`sed 's|[.]||g' tmpLocal.txt`
if [ "$VARgit" -gt "$VARlocal" ]; then
echo "New update available.";
else
echo "No update.";
fi
I have not found a solution for finding the number in text lines and comparing multiple dot numbers.
Thank you in advance.
You could do this with grep, e.g.:
IFS=. read rmajor rminor rpatch < <(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' remote.txt)
IFS=. read lmajor lminor lpatch < <(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' local.txt)
[ $rmajor -gt $lmajor ] && echo "New major version"
[ $rminor -gt $lminor ] && echo "New minor version"
[ $rpatch -gt $lpatch ] && echo "New patchlevel"
Edit
So to test if remote.txt contains a newer version, assuming all version items are numerical, something like this works:
if [ $rmajor -gt $lmajor ]; then
echo "New major version"
elif [ $rmajor -eq $lmajor -a $rminor -gt $lminor ]; then
echo "New minor version"
elif [ $rmajor -eq $lmajor -a $rminor -eq $lminor -a $rpatch -gt $lpatch ]; then
echo "New patchlevel"
else
echo "Remote has same version or older."
fi
Using GNU sort for Version-sort:
$ awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote | sort -k1,1Vr
3.1.4 remote
3.1.1 local
That lists the version numbers in decreasing order, together with the name of the file containing each one.
The above was run on these input files:
$ head local remote
==> local <==
15 March 2022
FVWM myExtensions ver. 3.1.1
==> remote <==
17 March 2022
FVWM myExtensions ver. 3.1.4
Given the above, this will print the name of the file that contains the highest version number IF the two are different, and print nothing otherwise:
$ awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote |
sort -k1,1Vr |
awk '
{ vers[NR]=$1; files[NR]=$0; sub(/[^\t]+\t/,"",files[NR]) }
END{ if ( vers[1] != vers[2] ) print files[1] }
'
remote
So then you can just test whether that is the remote file name and, if so, download.
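A minimal sketch of that last step, assuming the two files are still named local and remote (replace the echo with whatever download command you actually use):
newest=$(awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote |
  sort -k1,1Vr |
  awk '{ vers[NR]=$1; files[NR]=$0; sub(/[^\t]+\t/,"",files[NR]) }
       END{ if ( vers[1] != vers[2] ) print files[1] }')
if [ "$newest" = "remote" ]; then
  echo "New update available."   # download here
fi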
finding the number in text line
This is easy to do using a regular expression, for example with GNU AWK. Let the content of file.txt be
17 March 2022
FVWM myExtensions ver. 3.1.4
then
awk 'BEGIN{FPAT="[0-9.]+[.][0-9.]+"}NF{print $1}' file.txt
output
3.1.4
Explanation: I inform GNU AWK that a field consists of 1 or more digits or dots, then a literal dot (hence [.] rather than .), then 1 or more digits or dots. Then, if one or more such fields are present in a given line, print the 1st field (this solution assumes you have at most 1 such field in each line).
(tested in gawk 4.2.1)
comparing multiple dot numbers
I do not know a ready-made solution for this, but I want to note that this task might seem easy when it is not, unless you can enforce certain restrictions. In your example you have 3.1.1 and 3.1.4, that is, all elements are <10, so you will get the expected result if you compare them as you would for alphabetic ordering. Consider what happens if you have 3.9 and 3.11: the latter will be considered earlier, because the first difference is at the 3rd position and 1 comes earlier in the alphabet than 9. This problem does not arise if you can enforce that all parts consist of exactly one digit.
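If GNU sort is available (as in the sort -V answer above), one workaround is to let sort -V do the version-aware comparison. Here is a small sketch; ver_gt is just a made-up helper name:
ver_gt() {
  # succeeds if the first version is strictly newer than the second
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}
ver_gt 3.11 3.9 && echo "3.11 is newer"   # plain alphabetic comparison would get this wrong
ver_gt 3.1.4 3.1.1 && echo "update available"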
Related
Currently, I want to update the minor version in a text file using a bash command. This is the format I am dealing with: MAJOR.Minor.BugFix. I am able to increment the BugFix version number but have been unable to increment just the minor version.
I.e.
01.01.00-> 01.02.00
01.99.00-> 02.00.00
This is the code snippet I found online and was trying to tweak to update the minor version instead of the bug fix:
echo 01.00.1 | awk -F. -v OFS=. 'NF==1{print ++$NF}; NF>1{if(length($NF+1)>length($NF))$(NF-1)++; $NF=sprintf("%0*d", length($NF), ($NF+1)%(10^length($NF))); print}'
Note that although -F takes a regular expression, awk treats a single-character separator such as -F. literally, so it already splits on the dots; writing it as -F"[.]" just makes the intent explicit. Either way, once the version is split on the dots you can work on the fields directly without any of the length() stuff.
larsks' idea of splitting this into multiple lines is a good one:
echo $a | awk -F'[.]' '{
major=$1;
minor=$2;
patch=$3;
minor += 1;
major += minor / 100;
minor = minor % 100;
printf( "%02d.%02d.%02d\n", major, minor, patch );
}'
You don't need AWK for this; a plain read with IFS=. will do.
Though in Bash, leading zeroes indicate octal, so you'll need to guard against them.
IFS=. read -r major minor bugfix <<< "$1"
# Specify base 10 in case of leading zeroes (octal)
((major=10#$major, minor=10#$minor, bugfix=10#$bugfix))
if [[ $minor -eq 99 ]]; then
((major++, minor=0))
else
((minor++))
fi
printf '%02d.%02d.%02d\n' "$major" "$minor" "$bugfix"
Test run:
$ ./test.sh 01.01.00
01.02.00
$ ./test.sh 01.99.09
02.00.09
$ ./test.sh 1.1.1
01.02.01
Quick answer:
version=01.02.00
newversion="$(printf "%06d" "$(expr "$(echo $version | sed 's/\.//g')" + 100)")"
echo "${newversion:0:2}.${newversion:2:2}.${newversion:4:2}"
Full explanation:
version=01.02.00
# get the number without decimals
rawnumber="$(echo $version | sed 's/\.//g')"
# add 100 to number (to increment minor version)
sum="$(expr "$rawnumber" + 100)"
# make number 6 digits
newnumber="$(printf "%06d" "$sum")"
# add decimals back to number
newversion="${newnumber:0:2}.${newnumber:2:2}.${newnumber:4:2}"
echo "$newversion"
awk provides a simple and efficient way to handle updating the minor version (incrementing the major version and zeroing the minor version when the minor version is 99), e.g.
awk -F'.' '{
if ($2 == 99) {
$1++
$2=0
}
else
$2++
printf "%02d.%02d.%02d\n", $1, $2 ,$3
}' minorver
Above, the leading zeros are ignored when the fields are treated as numbers, and then it is just a simple comparison of the minor version to determine whether to increment the major version and zero the minor version, or simply increment the minor version. The printf is used to provide the formatted output:
Example Use/Output
With your data in the file minorver, you can do:
$ awk -F'.' '{
> if ($2 == 99) {
> $1++
> $2=0
> }
> else
> $2++
> printf "%02d.%02d.%02d\n", $1, $2 ,$3
> }' minorver
01.02.00
02.00.00
Let me know if you have further questions.
I'm trying to use grep to find some information, evaluate the information, and then perform a function.
Here's what I have; any help is appreciated.
FIXED TO:
#! /bin/bash
UT=$(/usr/sbin/system_profiler SPSoftwareDataType | grep "Time since boot" | grep "days")
if [ "$UT" -ge "5 days" ]; then
echo this
else
echo that
fi
SPSoftwareDataType looks like this:
System Software Overview:
System Version: OS X 10.9.5
Kernel Version: Darwin 13.4.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: xxxxxxxxxxxx
User Name: xxxxxxxxxx (xxxxxxxxxx)
Secure Virtual Memory: Enabled
Time since boot: 8 days 3:25
Trying with sysctl:
#! /bin/bash
UT=$(awk -F":" ' $4 > 200 ' sysctl -n kern.boottime)
echo $UT
if [ "$UT" -ge "1430315296" ]; then
echo this
else
echo that
fi
I can't see how you expect awk to compare anything specified in days and hours with anything else... nor do I know why you would choose to parse system_profiler output.
Have you considered:
sysctl -n kern.boottime
{ sec = 1431023230, usec = 0 } Thu May 7 19:27:10 2015
which will give the boot time in seconds since the epoch, which is just a simple integer you can compare with other times?
So you can parse out the seconds like this:
UT=$(sysctl -n kern.boottime | awk -F"[ ,]+" '{print $4}')
the -F"[ ,]+" says to treat multiple spaces or commas as field separators.
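From there, the 5-day test from the question is just integer arithmetic. A rough sketch (date +%s gives the current time in epoch seconds, and 5 days is 5*24*3600 seconds):
BOOT=$(sysctl -n kern.boottime | awk -F"[ ,]+" '{print $4}')
NOW=$(date +%s)
if [ $(( NOW - BOOT )) -ge $(( 5 * 24 * 3600 )) ]; then
  echo this
else
  echo that
fi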
You can do it all in awk:
awk '/Time since boot.*days/ {print ($4>5?"This":"That")}' file
This
awk '/Time since boot.*days/ {print ($4>8?"This":"That")}' file
That
Edit:
#!/bin/bash
/usr/sbin/system_profiler SPSoftwareDataType | awk '/Time since boot.*days/ {print ($4>8?"This":"That")}'
I have a data file called params.dat, and I'd like to change the values in the file each time I run my code.
Here's what I've got so far:
i=7.0
k=0
i=0
while [ $i -lt 10]
do
sed "3s/.*/$j 6.9 $j/" "28s/.*/image$i.bmp/" params.dat
((i++))
((k++))
((j=j-0.1))
done
The goal is to change the 3rd and 28th lines of the data file from
7.0 7.0 7.0
to
6.9 7.0 6.9
basically, subtract 0.1 from the first and third values each time,
and change the 28th line from
image0.bmp
to
image1.bmp
So the first time my program takes 7.0 7.0 7.0 and image0.bmp,
the second time I wish it to run with 6.9 7.0 6.9 and image1.bmp,
and so on...
Can anyone give me some tips on how to accomplish this?
Thanks in advance!
Use awk:
#!/bin/bash
# set iter to be the number of times to execute the script
# alternatively use $1 and pass it as parameter to the script
iter=10
for (( i=1; i<=$iter; i++ )); do
awk 'NR==3 {print $1-0.1, $2, $3-0.1; FS="[e.]"}
NR==28 {print $1 "e" $2+1 "." $3}
NR!=3 && NR!=28 {print}' params.dat > .tmpfile
mv .tmpfile params.dat
done
The awk code will get to line 3 and subtract 0.1 from the first and third fields, which are separated by a space by default. It then sets the field separator to either an 'e' or a period, so when we reach line 28 the line is split into three fields: 'imag', '0' and 'bmp' (separated by the 'e' and the '.'). There we increase the second field and print the result. All other lines will just be printed the way they were.
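To see that field-splitting trick on its own, here is a standalone demonstration of just the line-28 handling:
$ echo "image0.bmp" | awk -F'[e.]' '{print $1 "e" $2+1 "." $3}'
image1.bmp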
I want to know if it is possible to calculate the difference between two float numbers contained on two distinct lines of a file, in one bash command line.
File content example :
Start at 123456.789
...
...
...
End at 123654.987
I would like to do an echo of 123654.987-123456.789
Is that possible? What is this magic command line?
Thank you!
awk '
/Start/ { start = $3 } # 3rd field in line matching "Start"
/End/ {
end = $3; # 3rd field in line matching "End"
print end - start # Print the difference.
}
' < file
If you really want to do this on one line:
awk '/Start/ { start = $3 } /End/ { end = $3; print end - start }' < file
You can do this with this command:
start=`grep 'Start' FILENAME| cut -d ' ' -f 3`; end=`grep 'End' FILENAME | cut -d ' ' -f 3`; echo "$end-$start" | bc
You need the 'bc' program for this (for floating point math). You can install it with apt-get install bc, or yum, or rpm, zypper... OS specific :)
Bash doesn't support floating-point operations, but you can split your numbers into parts and perform integer operations. Example:
#!/bin/bash
echo $(( ${2%.*} - ${1%.*} )).$(( ${2#*.} - ${1#*.} ))
Result:
./test.sh 123456.789 123654.987
198.198
EDIT:
The correct solution would be to use not a command-line hack but a tool designed for performing floating-point operations, for example bc:
echo 123654.987-123456.789 | bc
output:
198.198
Here's a weird way:
printf -- "-%s+%s\n" $(grep -oP '(Start|End) at \K[\d.]+' file) | bc
I don't know much about bash scripting, and I'm trying to develop a bash script to do these operations:
I have a lot of .txt files in the same directory.
Every .txt file follows this structure:
file1.txt:
<name>first operation</name>
<operation>21</operation>
<StartTime>1292435633</StartTime>
<EndTime>1292435640</EndTime>
<name>second operation</name>
<operation>21</operation>
<StartTime>1292435646</StartTime>
<EndTime>1292435650</EndTime>
I want to search every <StartTime> line and convert it to a standard date/time format (not a unix timestamp) while preserving the structure, e.g. <StartTime>2010-12-15 22:52</StartTime>. Could this be a search/replace done with sed? I think I could use this date invocation that I found: date --utc --date "1970-01-01 $1 sec" "+%Y-%m-%d %T"
I want to do the same with the <EndTime> tag.
I should do this for all *.txt files in a directory.
I tried using sed, but without the wanted results. As I said, I don't know much about bash scripting, so any help would be appreciated.
Thank you for your help!
Regards
sed is incapable of doing date conversions; instead I would recommend you use a more appropriate tool like awk:
echo '<StartTime>1292435633</StartTime>' | awk '{
match($0,/[0-9]+/);
t = strftime("%F %T",substr($0,RSTART,RLENGTH),1);
sub(/[0-9]+/,t)
}
{print}'
If your input files have one tag per line, as in your structure example, it should work flawlessly.
If you need to repeat the operation for every .txt file, just use a shell for loop:
for file in *.txt; do
awk '/^<[^>]*Time>/{
match($0,/[0-9]+/);
t = strftime("%F %T",substr($0,RSTART,RLENGTH),1);
sub(/[0-9]+/,t)
} 1' "$file" >"$file.new"
# mv "$file.new" "$file"
done
In comparison to the previous code, I have made two minor changes:
added the condition /^<[^>]*Time>/, which checks whether the current line starts with <StartTime> or <EndTime>
converted the final {print} to the shorter 1 (see the note below)
If the files ending with .new contain the result you were expecting, you can uncomment the line containing mv.
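For reference on that last change: in awk, a pattern with no action falls back to the default action of printing the current line, and 1 is a condition that is always true, so these two commands are equivalent:
awk '{print}' file
awk '1' file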
Using grep:
while read line;do
if [[ $line == *"<StartTime>"* || $line == *"<EndTime>"* ]];then
n=$(echo $line | grep -Po '(?<=(>)).*(?=<)')
line=${line/$n/$(date -d @$n)}
fi
echo $line >> file1.new.txt
done < file1.txt
$ cat file1.new.txt
<name>first operation</name>
<operation>21</operation>
<StartTime>Wed Dec 15 18:53:53 CET 2010</StartTime>
<EndTime>Wed Dec 15 18:54:00 CET 2010</EndTime>
<name>second operation</name>
<operation>21</operation>
<StartTime>Wed Dec 15 18:54:06 CET 2010</StartTime>
<EndTime>Wed Dec 15 18:54:10 CET 2010</EndTime>
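If you want the 2010-12-15 22:52 style from the question rather than date's default output, GNU date accepts a format string; for example, the substitution line above could become:
line=${line/$n/$(date -d "@$n" '+%Y-%m-%d %H:%M')}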