Sorting on Same Line in Bash

Hello, I am trying to sort a set of numeric command-line arguments and then echo them back out in reverse numeric order on the same line, with a space between each. I have this loop:
for var in "$@"
do
echo -n "$var "
done | sort -rn
However, when I add -n to the echo, the sort stops working: the numbers do not get sorted and simply print in the order they were entered. I am trying to do this without using printf.

You can do it like this:
a=( $@ )
b=( $(printf "%s\n" ${a[@]} | sort -rn) )
printf "%s\n" ${b[@]}
# b is reverse sorted numerically now

man sort would tell you:
sort - sort lines of text files
So you can transform the result into the desired format after sorting.
In order to achieve the desired result, you can say:
for var in "$@"
do
echo "$var"
done | sort -rn | paste -sd' '
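For example, with three sample arguments this prints them reverse-sorted on one line (illustrative run):
$ set -- 3 1 2
$ for var in "$@"; do echo "$var"; done | sort -rn | paste -sd' '
3 2 1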

Maybe that's because sort is "line-oriented", so you need every number on a separate line, which is not the case using -n with echo.
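A quick way to see this (illustrative, with sample arguments): with echo -n the whole list reaches sort as a single line, so there is nothing for it to reorder:
$ set -- 3 1 2
$ for var in "$@"; do echo -n "$var "; done | sort -rn
3 1 2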
You could simply put the sorted numbers back in one line using sed, like that:
for var in "$@";
do
echo "$var ";
done | sort -rn | sed -e ':a;N;s/\n/ /;ba'

sort is used to sort multiple lines of text. Using the option -n of echo, you are printing everything in one line.
If you want the output to be sorted, you have to print it on multiple lines:
for var in "$@"
do
echo $var
done | sort -rn
If you want the result on only one line you could do:
echo $(for var in "$@"; do echo $var; done | sort -rn)

One trick is to play with the IFS:
IFS=$'\n'                     # "$*" now joins the arguments with newlines
set -- "$*"                   # collapse all args into one newline-separated word
IFS=$' \n'                    # splitting now happens on spaces and newlines
set -- $(sort -rn <<< "$*")   # sort the lines and re-split into arguments
echo "$*"                     # print them space-separated
This is the same idea but easier to read with the join() function:
join() {
IFS=$1
shift
echo "$*"
}
join ' ' $(join $'\n' $* | sort -nr)
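A quick illustration, assuming the script was called with the arguments 3 1 2:
set -- 3 1 2
join ' ' $(join $'\n' $* | sort -nr)    # prints: 3 2 1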

No loops required:
#!/bin/bash
sorted=( $(sort -rn < <(printf '%s\n' "$@")) )
echo ${sorted[@]}

To sort numbers on a single line, either comma- or space-separated, use the below:
echo "12,12,3,55,567,23,6,9,35,423"|sed -e 's;[ |,];\n;g'|sort -n|xargs|sed -e 's; ;,;g'
If your output does not need commas, skip the sed after xargs.
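For example, the same pipeline without the final sed gives the space-separated form (illustrative run):
$ echo "12,12,3,55,567,23,6,9,35,423"|sed -e 's;[ |,];\n;g'|sort -n|xargs
3 6 9 12 12 23 35 55 423 567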

Related

Find the occurrences of an element in array

arr=(7793 7793123 7793 37793 3214)
I'd like to find the occurrence of 7793. I tried: grep -o '7793' <<< $arr | wc -l
However, this also counts other elements that contain 7793 (e.g. 7793123, 37793)
printf '%s\n' "${arr[@]}" | grep -c '^7793$'
Explanation:
printf prints each item of the array on a new line
grep -c '^7793$' uses the start and end anchors to match 7793 exactly and outputs the count
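For example, with the array from the question this reports the two exact matches (illustrative run):
$ arr=(7793 7793123 7793 37793 3214)
$ printf '%s\n' "${arr[@]}" | grep -c '^7793$'
2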
With GNU grep (note the correct counting of elements containing newlines, refer to documentation for a description of options used):
arr=(7793 7793123 7793 37793 3214 7793$'\n'7793)
printf '%s\0' "${arr[@]}" | grep --null-data -cFxe 7793
Output:
2
This works because variables in bash cannot contain the NUL character.
You can use regex in this case
grep -e ^7793$
To make a bash script efficient (from a CPU/memory consumption point of view), avoid running subshells and external programs whenever possible. Hence, instead of using grep or any other program, we can use a simple loop with variable comparison and arithmetic:
#!/bin/bash
key=7793
arr=(7793 7793123 7793 37793 3214)
count=0
for i in "${arr[@]}"
do if [ "$i" = "$key" ]
then count=$((count+1))
fi
done
echo $count

How to filter an ordered list stored into a string

Is it possible in bash to filter out a part of a string with another given string?
I have a fixed list of motifs defined in a string. The order IS important and I want to keep only the parts that are passed as a parameter.
myDefaultList="s,t,a,c,k" #order is important
toRetains="k,t,c,u" #provided by the user, order is not enforced
retained=filter $myDefaultList $toRetains # code to filter
echo $retained # will print "t,c,k"
I can write an ugly method that will use IFS, arrays and loops, but I wonder if there's a 'clever' way to do that, using built-in commands?
Here is another approach:
tolines() { echo $1 | tr ',' '\n'; }
grep -f <(tolines "$toRetains") <(tolines "$myDefaultList") | paste -sd,
will print
t,c,k
Assign to a variable as usual.
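For instance, a small sketch that captures the result in a variable by reusing the same pipeline:
retained=$(grep -f <(tolines "$toRetains") <(tolines "$myDefaultList") | paste -sd,)
echo "$retained"    # t,c,k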
Since you mention in your comments that you are open to sed/awk, check also this with GNU awk:
$ echo "$a"
s,t,a,c,k
$ echo "$b"
k,t,c,u
$ awk -v RS=",|\n" 'NR==FNR{a[$1];next}$1 in a{printf("%s%s",$1,RT)}' <(echo "$b") <(echo "$a")
t,c,k
#!/bin/bash
myDefaultList="s,t,a,c,k"
toRetains="s,t,c,u"
IFS=","
for i in $myDefaultList
do
echo $toRetains | grep $i > /dev/null
if [ "$?" -eq "0" ]
then
retained=$retained" "$i
fi
done
echo $retained | sed -e 's/ /,/g' -e 's/,//1'
I have checked it and it runs for me. Kindly check.

How can I 'echo' out things without a newline?

I have the following code:
for x in "${array[@]}"
do
echo "$x"
done
The results are something like this (I sort these later in some cases):
1
2
3
4
5
Is there a way to print it as 1 2 3 4 5 instead? Without adding a newline every time?
Yes. Use the -n option:
echo -n "$x"
From help echo:
-n do not append a newline
This also leaves the output without a trailing newline, so if you want one you can add a final echo after the loop:
for ...; do ...; done; echo
Note:
This is not portable among various implementations of echo builtin/external executable. The portable way would be to use printf instead:
printf '%s' "$x"
printf '%s\n' "${array[@]}" | sort | tr '\n' ' '
printf '%s\n' -- more robust than echo and you want the newlines here for sort's sake
"${array[#]}" -- quotes unnecessary for your particular array, but good practice as you don't generally want word-spliting and glob expansions there
You don't need a for loop to sort numbers from an array.
Use process substitution like this:
sort <(printf "%s\n" "${array[@]}")
To remove new lines, use:
sort <(printf "%s\n" "${array[@]}") | tr '\n' ' '
You can also do it this way:
array=(1 2 3 4 5)
echo "${array[#]}"
If, for whatever reason, -n doesn't fix this for you, you can also add \c to the end of the thing to be echoed (bash's builtin echo only interprets it with -e; some other shells' echo honors it by default):
echo -e "$x\c"

Weird bash results using cut

I am trying to run this command:
./smstocurl SLASH2.911325850268888.911325850268896
smstocurl script:
#SLASH2.911325850268888.911325850268896
model=$(echo \&model=$1 | cut -d'.' -f 1)
echo $model
imea1=$(echo \&simImea1=$1 | cut -d'.' -f 2)
echo $imea1
imea2=$(echo \&simImea2=$1 | cut -d'.' -f 3)
echo $imea2
echo $model$imea1$imea2
Result Received
&model=SLASH2911325850268888911325850268896
Result Expected
&model=SLASH2&simImea1=911325850268888&simImea2=911325850268896
What am I missing here?
You are cutting on the dot (.). In the first case the part you want, &model=SLASH2, is entirely inside field 1, so the prefix survives and is printed.
In the other cases, however, you ask for the 2nd and 3rd fields (-f2, -f3), while the &simImea1= and &simImea2= prefixes you prepended belong to field 1, so they get cut off and only the bare numbers remain.
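You can see what cut actually produces by running the second command by hand (illustrative):
$ echo \&simImea1=SLASH2.911325850268888.911325850268896 | cut -d'.' -f 2
911325850268888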
Instead, I would use something like this:
while IFS="." read -r model imea1 imea2
do
printf "&model=%s&simImea1=%s&simImea2=%s\n" $model $imea1 $imea2
done <<< "$1"
Note the usage of printf and variables to have more control over what we are writing. Using a lot of escapes like in your echos can be risky.
Test
while IFS="." read -r model imea1 imea2; do printf "&model=%s&simImea1=%s&simImea2=%s\n" $model $imea1 $imea2
done <<< "SLASH2.911325850268888.911325850268896"
Returns:
&model=SLASH2&simImea1=911325850268888&simImea2=911325850268896
Alternatively, this sed makes it:
sed -r 's/^([^.]*)\.([^.]*)\.([^.]*)$/\&model=\1\&simImea1=\2\&simImea2=\3/' <<< "$1"
by capturing each dot-separated block and printing them back in the desired format.
You can also do it this way.
Run:
./program SLASH2.911325850268888.911325850268896
Script:
#!/bin/bash
String=`echo $1 | sed "s/\./\&simImea1=/"`
String=`echo $String | sed "s/\./\&simImea2=/"`
echo "&model=$String
Output:
&model=SLASH2&simImea1=911325850268888&simImea2=911325850268896
awk way
awk -F. '{print "&model="$1"&simImea1="$2"&simImea2="$3}' <<< "SLASH2.911325850268888.911325850268896"
or
awk -F. '$0="&model="$1"&simImea1="$2"&simImea2="$3' <<< "SLASH2.911325850268888.911325850268896"
output
&model=SLASH2&simImea1=911325850268888&simImea2=911325850268896

Randomizing arg order for a bash for statement

I have a bash script that processes all of the files in a directory using a loop like
for i in *.txt
do
ops.....
done
There are thousands of files and they are always processed in alphanumerical order because of '*.txt' expansion.
Is there a simple way to randomize the order and still ensure that I process all of the files only once?
Assuming the filenames do not have spaces, just substitute the output of List::Util::shuffle.
for i in `perl -MList::Util=shuffle -e'$,=$";print shuffle<*.txt>'`; do
....
done
If filenames do have spaces but don't have embedded newlines or backslashes, read a line at a time.
perl -MList::Util=shuffle -le'$,=$\;print shuffle<*.txt>' | while read i; do
....
done
To be completely safe in Bash, use NUL-terminated strings.
perl -MList::Util=shuffle -0 -le'$,=$\;print shuffle<*.txt>' |
while read -r -d '' i; do
....
done
Not very efficient, but it is possible to do this in pure Bash if desired. sort -R does something like this, internally.
declare -a a # create an integer-indexed array
for i in *.txt; do
j=$RANDOM # find an unused slot
while [[ -n ${a[$j]} ]]; do
j=$RANDOM
done
a[$j]=$i # fill that slot
done
for i in "${a[@]}"; do # iterate in index order (which is random)
....
done
Or use a traditional Fisher-Yates shuffle.
a=(*.txt)
for ((i=${#a[*]}; i>1; i--)); do
j=$[RANDOM%i]
tmp=${a[$j]}
a[$j]=${a[$[i-1]]}
a[$[i-1]]=$tmp
done
for i in "${a[@]}"; do
....
done
You could pipe your filenames through the sort command:
ls | sort --random-sort | xargs ....
Here's an answer that relies on very basic functions within awk so it should be portable between unices.
ls -1 | awk '{print rand()*100, $0}' | sort -n | awk '{print $2}'
EDIT:
ephemient makes a good point that the above is not space-safe. Here's a version that is:
ls -1 | awk '{print rand()*100, $0}' | sort -n | sed 's/[0-9\.]* //'
If you have GNU coreutils, you can use shuf:
while read -d '' f
do
# some stuff with $f
done < <(shuf -ze *)
This will work with files with spaces or newlines in their names.
Off-topic Edit:
To illustrate SiegeX's point in the comment:
$ a=42; echo "Don't Panic" | while read line; do echo $line; echo $a; a=0; echo $a; done; echo $a
Don't Panic
42
0
42
$ a=42; while read line; do echo $line; echo $a; a=0; echo $a; done < <(echo "Don't Panic"); echo $a
Don't Panic
42
0
0
The pipe causes the while to be executed in a subshell and so changes to variables in the child don't flow back to the parent.
Here's a solution with standard unix commands:
for i in $(ls); do echo $RANDOM-$i; done | sort | cut -d- -f 2-
Here's a Python solution, if it's available on your system:
import glob
import random
files = glob.glob("*.txt")
if files:
    random.shuffle(files)  # shuffle() shuffles the list in place and returns None
    for file in files:
        print(file)
