I want to retrieve the first X and the last Y characters from a string (standard ascii, so no worries about unicode).
I understand that I can do this as separate actions, i.e.:
FIRST=$(echo foobar | head -c 3)
LAST=$(echo foobar | tail -c 3)
COMBINED="${FIRST}${LAST}"
But is there a cleaner way to do this?
I would prefer to use common standard utils (i.e. bash built-ins, sed, awk etc.). At a push, a Perl one-liner is OK, but no Python or anything else.
head + tail: two answers, regarding the -c switch
head + tail, character based (with -c, reducing strings)
Under bash, you could do:
string=foobarbaz
echo ${string::3}${string: -3}
foobaz
But to avoid repetition in the case of shorter strings:
if ((${#string}>6));then
echo ${string::3}${string: -3}
else
echo $string
fi
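For the question's exact use case, the same expansions combine into a single assignment. A minimal sketch, assuming X and Y hold the desired lengths (offsets are evaluated arithmetically):
string=foobar
X=3 Y=3
# first X characters plus last Y characters; the space before -Y is required
COMBINED=${string:0:X}${string: -Y}
echo "$COMBINED"
foobar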
Full bash function
# Shrink a string to at most -m chars: keep -l chars on the left, insert
# the -s separator, and fill the rest from the right. -p pads shorter
# strings with the -P char; -v stores the result in the named variable.
shrinkStr(){
local sep='..' opt OPTIND OPTARG string varname='' paddstr paddchr=' '
local -i maxlen=40 lhlen=15 rhlen padd=0
while getopts 'P:l:m:s:v:p' opt; do
case $opt in
l) lhlen=$OPTARG ;;
m) maxlen=$OPTARG ;;
p) padd=1 ;;
P) paddchr=$OPTARG ;;
s) sep=$OPTARG ;;
v) varname=$OPTARG ;;
*) echo Wrong arg.; return 1 ;;
esac
done
rhlen="maxlen-lhlen-${#sep}" # evaluated arithmetically: rhlen is declared with -i
((rhlen<1)) && { echo bad lengths; return 1;}
shift $((OPTIND-1))
string="$*"
if ((${#string}>maxlen)) ;then
string="${string::lhlen}$sep${string: -rhlen}"
elif ((${#string}<maxlen)) && ((padd));then
printf -v paddstr '%*s' $((maxlen-${#string})) ''
string+=${paddstr// /$paddchr}
fi
if [[ $varname ]] ;then
printf -v "$varname" '%s' "$string"
else
echo "$string"
fi
}
Then
shrinkStr -l 4 -m 10 Hello world!
Hell..rld!
shrinkStr -l 2 -m 10 Hello world!
He..world!
shrinkStr -l 3 -m 10 -s '+++' Hello world!
Hel+++rld!
This works even with UTF-8 characters:
cnt=1;for str in Généralités Language Théorème Février 'Hello world!';do
shrinkStr -l5 -m11 -vOutstr -pP_ "$str"
printf ' %11d: |%s|\n' $((cnt++)) "$Outstr"
done
1: |Généralités|
2: |Language___|
3: |Théorème___|
4: |Février____|
5: |Hello..rld!|
cnt=1;for str in Généralités Language Théorème Février 'Hello world!';do
shrinkStr -l5 -m10 -vOutstr -pP_ "$str"
printf ' %11d: |%s|\n' $((cnt++)) "$Outstr"
done
1: |Génér..tés|
2: |Language__|
3: |Théorème__|
4: |Février___|
5: |Hello..ld!|
head + tail, line based (without -c, reducing files)
This uses only one fork, to sed.
Here is a little function I wrote for this:
headTail() {
local hln=${1:-10} tln=${2:-10} str;
# build tln-1 copies of '$!N;' to preload a window of tln lines
printf -v str '%*s' $((tln-1)) '';
# print the first hln lines, then slide the tln-line window to the end
sed -ne "1,${hln}{p;\$q};$((hln+1)){${str// /\$!N;}};:a;\$!{N;D;ba};p"
}
Usage:
headTail <head lines> <tail lines>
Both arguments default to 10.
In practice:
headTail 3 4 < <(seq 1 1000)
1
2
3
997
998
999
1000
Seems correct. Testing border cases (where the number of lines is smaller than requested):
headTail 1 9 < <(seq 1 3)
1
2
3
headTail 9 1 < <(seq 1 3)
1
2
3
Taking more lines (I take the first 100 and last 100 lines, but print only 2 top lines, 4 middle lines and 2 bottom lines of headTail's output):
headTail 100 100 < <(seq 1 2000)|sed -ne '1,2s/^/T /p;99,102s/^/M /p;199,$s/^/B /p'
T 1
T 2
M 99
M 100
M 1901
M 1902
B 1999
B 2000
BUG (limit): Don't use this with 0 as argument!
headTail 0 3 < <(seq 1 2000)
1
1998
1999
2000
headTail 3 0 < <(seq 1 2000)
1
2
3
1999
2000
BUG (limit): because the generated sed script can exceed the maximum argument length:
headTail 4 32762 <<<Foo\ bar
bash: /bin/sed: Argument list too long
For both of these limits to be handled, the function becomes (the sed script is now passed via process substitution instead of on the command line):
head + tail lines, using one fork to sed
headTail() {
local hln=${1:-10} tln=${2:-10} str sedcmd=''
((hln>0)) && sedcmd+="1,${hln}{p;\$q};"
if ((tln>0)) ;then
printf -v str '%*s' $((tln-1)) ''
sedcmd+="$((hln+1)){${str// /\$!N;}};:a;\$!{N;D;ba};p;"
fi
sed -nf <(echo "$sedcmd")
}
Then
headTail 3 4 < <(seq 1 1000) |xargs
1 2 3 997 998 999 1000
headTail 3 0 < <(seq 1 1000) |xargs
1 2 3
headTail 0 4 < <(seq 1 1000) |xargs
997 998 999 1000
for i in {6..9};do printf " %3d: " $i;headTail 3 4 < <(seq 1 $i) |xargs; done
6: 1 2 3 4 5 6
7: 1 2 3 4 5 6 7
8: 1 2 3 5 6 7 8
9: 1 2 3 6 7 8 9
Stronger test, with bigger numbers: reading the first and last 500'000 lines from an input of 3'000'000 lines (and then 5'000'000 from 30'000'000):
headTail 500000 500000 < <(seq 1 3000000) | sed -ne '499999,500002p'
499999
500000
2500001
2500002
headTail 5000000 5000000 < <(seq 1 30000000) | sed -ne '4999999,5000002p'
4999999
5000000
25000001
25000002
$ perl -E '($s, $x, $y) = @ARGV; substr $s, $x, -$y, ""; say $s' abcdefgh 2 3
abfgh
The four argument variant of substr replaces the given portion of the string with the last argument. Here, we replace from position $x to position -$y (negative numbers count from the end of the string), and use an empty string as replacement, i.e. we remove the middle part.
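The one-liner also wraps easily into a small shell function; a sketch (first_last is a hypothetical name):
# print the first $2 and the last $3 characters of $1
first_last() {
perl -E '($s, $x, $y) = @ARGV; substr $s, $x, -$y, ""; say $s' "$1" "$2" "$3"
}
first_last abcdefgh 2 3
abfgh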
I am using bash in order to process software responses on-the-fly and I am looking for a way to find the index of the maximum element in the array.
The data that gets fed to the bash script is like this:
25 9
72 0
3 3
0 4
0 7
And so I create two arrays. There is
arr1 = [ 25 72 3 0 0 ]
arr2 = [ 9 0 3 4 7 ]
And what I need is to find the index of the maximum number in arr1 in order to use it also for arr2.
But I would like to see if there is a quick, optimal way to do this.
Would it maybe be better to use a dictionary structure [key][value] with the data I have? Would this make the process easier?
I have also found [1] (from user jhnc) but I don't quite think it is what I want.
My brute-force approach is the following:
function MAX {
arr1=( 25 72 3 0 0 )
arr2=( 9 0 3 4 7 )
local indx=0
local max=${arr1[0]}
local flag
for ((i=1; i<${#arr1[@]};i++)); do
#To avoid invalid arithmetic operators when items are floats/doubles
flag=$( python <<< "print(${arr1[${i}]} > ${max})")
if [ $flag == "True" ]; then
indx=${i}
max=${arr1[${i}]}
fi
done
echo "MAX:INDEX = ${max}:${indx}"
echo "${arr1[${indx}]}"
echo "${arr2[${indx}]}"
}
This approach obviously works, but is it the optimal one? Is there a faster way to perform the task?
arr1 = [ 99.97 0.01 0.01 0.01 0 ]
arr2 = [ 0 6 4 3 2 ]
In this example, if an array contains floats then I would get a
syntax error: invalid arithmetic operator (error token is ".97)
So, I am using
flag=$( python <<< "print(${arr1[${i}]} > ${max})")
in order to overcome this issue.
Finding a maximum is inherently an O(n) operation. But there's no need to spawn a Python process on each iteration to perform the comparison. Write a single awk script instead.
awk 'BEGIN {
split(ARGV[1], a1);
split(ARGV[2], a2);
max=a1[1];
indx=1;
for (i in a1) {
if (a1[i] > max) {
indx = i;
max = a1[i];
}
}
print "MAX:INDEX = " max ":" (indx - 1)
print a1[indx]
print a2[indx]
}' "${arr1[*]}" "${arr2[*]}"
The two shell arrays are passed as space-separated strings to awk, which splits them back into awk arrays.
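For the question's sample arrays, this should print (with the index converted back to bash's 0-based numbering):
MAX:INDEX = 72:1
72
0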
It's difficult to do it efficiently if you really do need to compare floats. Bash can't do floats, which means invoking an external program for every number comparison. However, comparing every number in bash is not necessarily needed.
Here is a fast, pure-bash, integer-only solution, using comparison:
#!/bin/bash
arr1=( 25 72 3 0 0)
arr2=( 9 0 3 4 7)
# Get the maximum, and also save its index(es)
for i in "${!arr1[@]}"; do
if ((arr1[i]>arr1_max)); then
arr1_max=${arr1[i]}
max_indexes=($i)
elif [[ "${arr1[i]}" == "$arr1_max" ]]; then
max_indexes+=($i)
fi
done
# Print the results
printf '%s\n' \
"Array1 max is $arr1_max" \
"The index(s) of the maximum are:" \
"${max_indexes[#]}" \
"The corresponding values from array 2 are:"
for i in "${max_indexes[@]}"; do
echo "${arr2[i]}"
done
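With the sample arrays, the expected output is:
Array1 max is 72
The index(s) of the maximum are:
1
The corresponding values from array 2 are:
0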
Here is another method that can handle floats. Comparison in bash is avoided altogether; instead, the much faster sort(1) is used, and only once, rather than starting a new python instance for every number.
#!/bin/bash
arr1=( 25 72 3 0 0)
arr2=( 9 0 3 4 7)
arr1_max=$(printf '%s\n' "${arr1[@]}" | sort -n | tail -1)
for i in "${!arr1[@]}"; do
[[ "${arr1[i]}" == "$arr1_max" ]] &&
max_indexes+=($i)
done
# Print the results
printf '%s\n' \
"Array 1 max is $arr1_max" \
"The index(s) of the maximum are:" \
"${max_indexes[#]}" \
"The corresponding values from array 2 are:"
for i in "${max_indexes[@]}"; do
echo "${arr2[i]}"
done
Example output:
Array 1 max is 72
The index(s) of the maximum are:
1
The corresponding values from array 2 are:
0
Unless you need those arrays, you can also feed your input script directly into something like this:
#!/bin/bash
input-script |
sort -nr |
awk '
(NR==1) {print "Max: "$1"\nCorresponding numbers:"; max = $1}
{if (max == $1) print $2; else exit}'
Example (with some extra numbers):
$ echo \
'25 9
72 0
72 11
72 4
3 3
3 14
0 4
0 1
0 7' |
sort -nr |
awk '(NR==1) {max = $1; print "Max: "$1"\nCorresponding numbers:"}
{if (max == $1) print $2; else exit}'
Max: 72
Corresponding numbers:
4
11
0
You can also do it 100% in awk (GNU awk, for asort), including sorting:
$ echo \
'25 9
72 0
72 11
72 4
3 3
3 14
0 4
0 1
0 7' |
awk '
{
col1[a++] = $1
line[a-1] = $0
}
END {
asort(col1)
col1_max = col1[a] # after asort, indices run 1..a, so the max is col1[a]
print "Max is "col1_max"\nCorresponding numbers are:"
for (i in line) {
if (line[i] ~ "^" col1_max "\\s") {
split(line[i], max_line)
print max_line[2]
}
}
}'
Max is 72
Corresponding numbers are:
0
11
4
Or, as simply as possible, just to get the maximum of column 1 and any single corresponding number from column 2:
$ echo \
'25 9
72 0
3 3
0 4
0 7' |
sort -nr |
head -1
72 0
I'm working on a shell script right now. I need to loop through a text file, grab the text from it, and find the average, max and min of the numbers on each line, then print them in a chart with the name of each line. This is the text file:
Experiment1 9 8 1 2 9 0 2 3 4 5
collect1 83 39 84 2 1 3 0 9
jump1 82 -1 9 26 8 9
exp2 22 0 7 1 0 7 3 2
jump2 88 7 6 5
taker1 5 5 44 2 3
So far, all I can do is loop through it and print each line like so:
#!/bin/bash
while read line
do
echo $line
done < mystats.txt
I'm a beginner and nothing I've found online has helped me.
One way, using perl for all the calculations:
$ perl -MList::Util=min,max,sum -anE 'BEGIN { say "Name\tAvg\tMin\tMax" }
$n = shift @F; say join("\t", $n, sum(@F)/@F, min(@F), max(@F))' mystats.txt
Name Avg Min Max
Experiment1 4.3 0 9
collect1 27.625 0 84
jump1 22.1666666666667 -1 82
exp2 5.25 0 22
jump2 26.5 5 88
taker1 11.8 2 44
It uses autosplit mode (-a) to split each line into an array (Much like awk), and the standard List::Util module's math functions to calculate the mean, min, and max of each line's numbers.
And here's a pure bash version using nothing but builtins (Though I don't recommend doing this; among other things bash doesn't do floating point math, so the averages are off):
#!/usr/bin/env bash
printf "Name\tAvg\tMin\tMax\n"
while read name nums; do
read -a numarr <<< "$nums"
total=0
min=${numarr[0]}
max=${numarr[0]}
for n in "${numarr[@]}"; do
(( total += n ))
if [[ $n -lt $min ]]; then
min=$n
fi
if [[ $n -gt $max ]]; then
max=$n
fi
done
(( avg = total / ${#numarr[*]} ))
printf "%s\t%d\t%d\t%d\n" "$name" "$avg" "$min" "$max"
done < mystats.txt
Using awk:
awk '{
min = $2; max = $2; sum = $2;
for (i=3; i<=NF; i++) {
if (min > $i) min = $i;
if (max < $i) max = $i;
sum+=$i }
printf "for %-20s min=%10i max=%10i avg=%10.3f\n", $1, min, max, sum/(NF-1) }' mystats.txt
How to split file by percentage of no. of lines?
Let's say I want to split my file into 3 portions (60%/20%/20%). I could do this manually, -_- :
$ wc -l brown.txt
57339 brown.txt
$ bc <<< "57339 / 10 * 6"
34398
$ bc <<< "57339 / 10 * 2"
11466
$ bc <<< "34398 + 11466"
45864
$ bc <<< "34398 + 11466 + 11475"
57339
$ head -n 34398 brown.txt > part1.txt
$ sed -n 34399,45864p brown.txt > part2.txt
$ sed -n 45865,57339p brown.txt > part3.txt
$ wc -l part*.txt
34398 part1.txt
11466 part2.txt
11475 part3.txt
57339 total
But I'm sure there's a better way!
There is a utility that takes as arguments the line numbers that should become the first of each respective new file: csplit. This is a wrapper around its POSIX version:
#!/bin/bash
usage () {
printf '%s\n' "${0##*/} [-ks] [-f prefix] [-n number] file arg1..." >&2
}
# Collect csplit options
while getopts "ksf:n:" opt; do
case "$opt" in
k|s) args+=(-"$opt") ;; # k: no remove on error, s: silent
f|n) args+=(-"$opt" "$OPTARG") ;; # f: filename prefix, n: digits in number
*) usage; exit 1 ;;
esac
done
shift $(( OPTIND - 1 ))
fname=$1
shift
ratios=("$@")
len=$(wc -l < "$fname")
# Sum of ratios and array of cumulative ratios
for ratio in "${ratios[@]}"; do
(( total += ratio ))
cumsums+=("$total")
done
# Don't need the last element
unset cumsums[-1]
# Array of numbers of first line in each split file
for sum in "${cumsums[@]}"; do
linenums+=( $(( sum * len / total + 1 )) )
done
csplit "${args[#]}" "$fname" "${linenums[#]}"
After the name of the file to split up, it takes the ratios for the sizes of the split files relative to their sum, i.e.,
percsplit brown.txt 60 20 20
percsplit brown.txt 6 2 2
percsplit brown.txt 3 1 1
are all equivalent.
Usage similar to the case in the question is as follows:
$ percsplit -s -f part -n 1 brown.txt 60 20 20
$ wc -l part*
34403 part0
11468 part1
11468 part2
57339 total
Numbering starts with zero, though, and there is no txt extension. The GNU version supports a --suffix-format option that would allow for .txt extension and which could be added to the accepted arguments, but that would require something more elaborate than getopts to parse them.
This solution plays nice with very short files (splitting a file of two lines into two), and the heavy lifting is done by csplit itself.
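As an aside, if GNU csplit is available, the same split can be done as a one-off by passing the suffix format directly with -b (first-line numbers taken from the wc output above):
$ csplit -s -f part -b '%d.txt' brown.txt 34404 45872
$ wc -l part*.txt
34403 part0.txt
11468 part1.txt
11468 part2.txt
57339 total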
$ cat file
a
b
c
d
e
$ cat tst.awk
BEGIN {
split(pcts,p)
nrs[1]
for (i=1; i in p; i++) {
pct += p[i]
nrs[int(size * pct / 100) + 1]
}
}
NR in nrs{ close(out); out = "part" ++fileNr ".txt" }
{ print $0 " > " out }
$ awk -v size=$(wc -l < file) -v pcts="60 20 20" -f tst.awk file
a > part1.txt
b > part1.txt
c > part1.txt
d > part2.txt
e > part3.txt
Change the " > " to just > to actually write to the output files.
Usage
The following bash script allows you to specify the percentage like
./split.sh brown.txt 60 20 20
You can also use the placeholder ".", which fills the percentage up to 100%.
./split.sh brown.txt 60 20 .
The split files are written to
part1-brown.txt
part2-brown.txt
part3-brown.txt
The script always generates as many part files as numbers specified.
If the percentages sum up to 100, cat part* will always generate the original file (no duplicated or missing lines).
Bash Script: split.sh
#! /bin/bash
file="$1"
fileLength=$(wc -l < "$file")
shift
part=1
percentSum=0
currentLine=1
for percent in "$@"; do
[ "$percent" == "." ] && ((percent = 100 - percentSum))
((percentSum += percent))
if ((percent < 0 || percentSum > 100)); then
echo "invalid percentage" 1>&2
exit 1
fi
((nextLine = fileLength * percentSum / 100))
if ((nextLine < currentLine)); then
printf "" # create empty file
else
sed -n "$currentLine,$nextLine"p "$file"
fi > "part$part-$file"
((currentLine = nextLine + 1))
((part++))
done
Here is an awk script that partitions one or more files according to the given weights:
BEGIN {
split(w, weight)
total = 0
for (i in weight) {
weight[i] += total
total = weight[i]
}
}
FNR == 1 {
if (NR!=1) {
write_partitioned_files(weight,a)
split("",a,":") #empty a portably
}
name=FILENAME
}
{a[FNR]=$0}
END {
write_partitioned_files(weight,a)
}
function write_partitioned_files(weight, a) {
split("",threshold,":")
size = length(a)
for (i in weight){
threshold[length(threshold)] = int((size * weight[i] / total)+0.5)+1
}
l=1
part=0
for (i in threshold) {
close(out)
out = name ".part" ++part
for (;l<threshold[i];l++) {
print a[l] " > " out
}
}
}
Invoke as:
awk -v w="60 20 20" -f above_script.awk file_to_split1 file_to_split2 ...
Replace " > " with > in script to actually write partitioned files.
The variable w expects space-separated numbers. Files are partitioned in that proportion. For example, "2 1 1 3" will partition files into four, with the numbers of lines in proportion 2:1:1:3. Any sequence of numbers adding up to 100 can be used as percentages.
For large files the array a may consume too much memory. If that is an issue, here is an alternative awk script:
BEGIN {
split(w, weight)
for (i in weight) {
total += weight[i]; weight[i] = total #cumulative sum
}
}
FNR == 1 {
#get number of lines. take care of single quotes in filename.
name = gensub("'", "'\"'\"'", "g", FILENAME)
"wc -l '" name "'" | getline size
split("", threshold, ":")
for (i in weight){
threshold[length(threshold)+1] = int((size * weight[i] / total)+0.5)+1
}
part=1; close(out); out = FILENAME ".part" part
}
{
if(FNR>=threshold[part]) {
close(out); out = FILENAME ".part" ++part
}
print $0 " > " out
}
This passes through each file twice. Once for counting lines (via wc -l) and the other time while writing partitioned files. Invocation and effect is similar to the first method.
I like Benjamin W.'s csplit solution, but it's so long...
#!/bin/bash
# usage ./splitpercs.sh file 60 20 20
n=`wc -l <"$1"` || exit 1
echo $* | tr ' ' '\n' | tail -n+2 | head -n`expr $# - 1` |
awk -v n=$n 'BEGIN{r=1} {r+=n*$0/100; if(r > 1 && r < n){printf "%d\n",r}}' |
uniq | xargs csplit -sfpart "$1"
(the if(r > 1 && r < n) and uniq bits are to prevent creating empty files or strange behavior for small percentages, files with small numbers of lines, or percentages that add to over 100.)
I just followed your lead and made what you do manually into a script. It may not be the fastest or "best", but if you understand what you are doing now and can just "scriptify" it, you may be better off should you need to maintain it.
#!/bin/bash
# thisScript.sh yourfile.txt 20 50 10 20
YOURFILE=$1
shift
# changed to cat | wc so I dont have to remove the filename which comes from
# wc -l
LINES=$(cat $YOURFILE | wc -l )
startpct=0;
PART=1;
for pct in "$@"
do
# I am assuming that each parameter is on top of the last
# so 10 30 10 would become 10, 10+30 = 40, 10+30+10 = 50, ...
endpct=$( echo "$startpct + $pct" | bc)
# your math but changed parts of 100 instead of parts of 10.
# change bc <<< to echo "..." | bc
# so that one can capture the output into a bash variable.
FIRSTLINE=$( echo "$LINES * $startpct / 100 + 1" | bc )
LASTLINE=$( echo "$LINES * $endpct / 100" | bc )
# use sed every time because the special case for head
# doesn't really help performance.
sed -n $FIRSTLINE,${LASTLINE}p $YOURFILE > part${PART}.txt
((PART++))
startpct=$endpct
done
# get the rest if the percentages don't add up to 100%
if [[ $( echo "$startpct < 100" | bc ) -gt 0 ]] ; then
FIRSTLINE=$( echo "$LINES * $startpct / 100 + 1" | bc )
sed -n $FIRSTLINE,${LINES}p $YOURFILE > part${PART}.txt
fi
wc -l part*.txt
I have a file named numbers, which simply contains a bunch of random numbers:
1 2 3
7 5 9
2 2 9
5 4 5
7 2 6
I have to create a script that finds the median for each row, and here is my code:
while read -a row
do
for i in "${row[@]}"
do
length=`expr ${#row[@]} % 2`
if [ $length -ne 0 ] ; then
mid=`expr ${#row[@]} / 2`
echo ${row[middle]}
elif [ $length -eq 0 ] ; then
val1=`expr ${#row[@]} / 2`
val2=`expr (${$row[@]} / 2) + 1`
mid=`expr ($val1 + $val2) / 2`
echo $mid
done | sort -n
done < numbers
However, this doesn't work; it shows an error instead. What mistake did I make in this code? Also, I still haven't figured out the proper place for the sort -n, since the numbers need to be sorted before calculating the median, right?
Bash can only do integer arithmetic; you need a tool like bc to compute the average of the two middle elements:
#!/bin/bash
while read -a n ; do
n=($(IFS=$'\n' ; echo "${n[*]}" | sort -n))
len=${#n[@]}
if (( len % 2 )) ; then
echo ${n[ len / 2 ]}
else
bc -l <<< "scale=1; (${n[ len / 2 - 1 ]} + ${n[ len / 2 ]}) / 2"
fi
done
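A hypothetical run against the numbers file from the question (the script reads standard input, so save it as, say, median.sh):
$ ./median.sh < numbers
2
7
2
5
6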
I'd probably reach for a higher-level language, e.g. Perl:
#!/usr/bin/perl
use warnings;
use strict;
while (<>) {
my @n = sort { $a <=> $b } split;
print @n % 2 ? $n[ @n / 2 ]
: ($n[ @n / 2 - 1 ] + $n[ @n / 2 ]) / 2,
"\n";
}
I just had to awk it, for the fun of it.
Notice I don't use an if, but fractional index offsets: for an odd field count (say NF=3), int(NF/2+0.7) and int(NF/2+1.2) are both 2, so the median element is added to itself and halved; for an even count (say NF=4), they evaluate to 2 and 3, averaging the two middle elements.
awk '{
split($0,a) # create array a from input line
asort(a,b) # sort array into array b (gnu awk specific)
# add twice the median, or around the median and divide by 2
print ( b[int(NF/2+0.7)] + b[int(NF/2+1.2)] )/2
}' numbers
Shortened (67 chars):
awk '{split($0,a);asort(a,b);print(b[int(NF/2+0.7)]+b[int(NF/2+1.2)])/2}' numbers
66 chars golf :-)
awk '{split($0,a);asort(a,b);$0=(b[int(NF/2+0.7)]+b[int(NF/2+1.2)])/2}1' numbers
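Against the numbers file from the question, both variants print:
2
7
2
5
6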
I am writing a script that finds the minimum value in a string. The string is given to me with a cat <file> and then I parse each number inside that string. The string only contains a set of numbers separated by spaces.
This is the code:
echo $FREQUENCIES
for freq in $FREQUENCIES
do
echo "Freq: $freq"
if [ -z "$MINFREQ" ]
then
MINFREQ=$freq
echo "Assigning MINFREQ for the first time with $freq"
elif [ $MINFREQ -gt $freq ]
then
MINFREQ=$freq
echo "Replacing MINFREQ with $freq"
fi
done
Here is the output I get:
800000 700000 600000 550000 500000 250000 125000
Freq: 800000
Assigning MINFREQ for the first time with 800000
Freq: 700000
Replacing MINFREQ with 700000
Freq: 600000
Replacing MINFREQ with 600000
Freq: 550000
Replacing MINFREQ with 550000
Freq: 500000
Replacing MINFREQ with 500000
Freq: 250000
Replacing MINFREQ with 250000
Freq: 125000
Replacing MINFREQ with 125000
Freq:
: integer expression expected
The problem is that the last line, for some reason, is empty or contains whitespace (I am not sure why). I tried testing whether the variable was set with if [ -n "$freq" ], but this test doesn't seem to work here; it still goes through the if statement for the last line.
Could someone please help me figure out why the last time the loop executes, $freq is set to empty or whitespace and how to avoid this please?
EDIT:
using od -c fed with echo "<<$freq>>":
0000000 < < 8 0 0 0 0 0 > > \n
0000013
0000000 < < 7 0 0 0 0 0 > > \n
0000013
0000000 < < 6 0 0 0 0 0 > > \n
0000013
0000000 < < 5 5 0 0 0 0 > > \n
0000013
0000000 < < 5 0 0 0 0 0 > > \n
0000013
0000000 < < 2 5 0 0 0 0 > > \n
0000013
0000000 < < 1 2 5 0 0 0 > > \n
0000013
0000000 < < \r > > \n
0000006
There seems to be an extra \r (from the file).
Thank you very much!
If you're only working with integer values, you can validate your string using regex:
elif [[ $freq =~ ^[0-9]+$ && $MINFREQ -gt $freq ]]
For the error problem: you might have some extra white space in $FREQUENCIES?
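Since the od -c dump in the question shows a trailing \r, one fix is to strip carriage returns before looping. A sketch, assuming the values were read from a CRLF (DOS-formatted) file; somefile is a placeholder name:
# drop any carriage returns picked up from CRLF line endings
FREQUENCIES=${FREQUENCIES//$'\r'/}
# or remove them while reading the file in the first place
FREQUENCIES=$(tr -d '\r' < somefile)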
Another solution, with awk:
echo $FREQUENCIES | awk '{min=$1;for (i=2;i<=NF;i++) {if ( $i<min ) { min=$i } } ; print min }'
If it's a really long variable, you can go with:
echo $FREQUENCIES | awk -v RS=" " 'NR==1 {min=$0} {if ( $0<min ) { min=$0 } } END {print min }'
(It sets the record separator to space, then on the very first record sets min to that value, then for every record checks whether it's smaller than min, and finally prints it.)
HTH
If you are using bash, you have arithmetic expressions and the "if unset: use value and assign" parameter substitution:
#!/bin/bash
for freq in "$@"; do
(( minfreq = freq < ${minfreq:=freq} ? freq : minfreq ))
done
echo $minfreq
use:
./script 800000 700000 600000 550000 500000 250000 125000
Data :
10,
10.2,
-3,
3.8,
3.4,
12
Minimum :
echo -e "10\n10.2\n-3\n3.8\n3.4\n12" | sort -n | head -1
Output: -3
Maximum :
echo -e "10\n10.2\n-3\n3.8\n3.4\n12" | sort -nr | head -1
Output: 12
How? 1. Print line by line. 2. Sort numerically (reversed for the maximum). 3. Print the first line alone. Simple!
This may not be the best method, but it is easy for learners, I am sure.
echo $FREQUENCIES | awk '{for (;NF-1;NF--) if ($1>$NF) $1=$NF} 1'
compare first and last field
set first field to the smaller of the two
remove last field
once one field remains, print
Example, using the frequencies from the question:
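$ echo 800000 700000 600000 550000 500000 250000 125000 | awk '{for (;NF-1;NF--) if ($1>$NF) $1=$NF} 1'
125000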