Cut unix variable - bash

I have the following at the moment:
for file in *
do
    list="$list""$file "`cat $file | wc -l | sort -k1`$'\n'
done
echo "$list"
This is printing:
fileA 10
fileB 20
fileC 30
I would then like to cycle through $list and cut column 2 and perform calculations.
When I do:
for line in "$list"
do
    noOfLinesInFile=`echo "$line" | cut -d' ' -f2`
    echo "$noOfLinesInFile"
done
It prints:
10
20
30
BUT, the for loop is only being entered once. In this example, it should be entering the loop 3 times.
Can someone please tell me what I should do here to achieve this?

If you quote the variable
for line in "$list"
there is only one word, so the loop is executed just once.
Without quotes, $line would be populated with each word found in $list, which is not what you want either, as it would process the values word by word, not line by line.
You can set the $IFS variable to newline to split $list on newlines:
IFS=$'\n'
for line in $list ; do
    ...
done
Don't forget to reset IFS to the original value - either put the whole part into a subshell (if no variables should survive the loop)
(
    IFS=$'\n'
    for ...
)
or back up the value and restore it after the loop:
IFS_=$IFS
IFS=$'\n'
for ...
    ...
done
IFS=$IFS_
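Applied to the $list built in the question, the subshell variant would look like this (a minimal sketch; note the unquoted $list expansion is still subject to globbing, which set -f would suppress):
(
    IFS=$'\n'
    for line in $list; do
        noOfLinesInFile=$(echo "$line" | cut -d' ' -f2)
        echo "$noOfLinesInFile"
    done
)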

This is because lists in the shell are just defined using space as a separator.
# list="a b c"
# for i in $list; do echo $i; done
a
b
c
# for i in "$list"; do echo $i; done
a b c
In your first loop, you are not actually building a list in the shell sense.
You should set a separator other than the default, either for the loop, in the append, or in the cut, as in the sketch below.
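For example, appending with a separator that cannot appear in the data (here ':', assuming no filename contains one) and then splitting on it with read:
list=''
for file in *; do
    list+="$file:"$(wc -l < "$file")$'\n'
done

while IFS=: read -r name count; do
    [[ -n $name ]] && echo "$count"   # skip the blank line left by the trailing \n
done <<< "$list"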

Use arrays instead:
#!/bin/bash
files=()
linecounts=()
for file in *; do
    files+=("$file")
    linecounts+=("$(wc -l < "$file")")
done

for i in "${!files[@]}"; do
    echo "${linecounts[i]}"
    printf '%s %s\n' "${files[i]}" "${linecounts[i]}" ## Another form.
done
Although it can be done more simply as printf '%s\n' "${linecounts[@]}".
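Since the original goal was to perform calculations on column 2, the counts can then be consumed straight from the array; a minimal sketch that totals them:
total=0
for n in "${linecounts[@]}"; do
    (( total += n ))
done
echo "total lines: $total"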

wc -l will only output one value, so you don't need to sort it:
for file in *; do
    list+="$file "$( wc -l < "$file" )$'\n'
done
echo "$list"
Then, you can use a while loop to read the list line-by-line:
while read file nlines; do
    echo $nlines
done <<< "$list"
That while loop is fragile if any filename has spaces. This is a bit more robust:
while read -a words; do
    echo ${words[-1]}
done <<< "$list"

Loop through table and parse multiple arguments to scripts in Bash

I am in a situation similar to this one and am having difficulty implementing this kind of solution.
I have file.tsv formatted as follows:
x y
dog woof
CAT meow
loud_goose honk-honk
duck quack
with a fixed number of columns (but a variable number of rows), and I need to loop over those pairs of values, all but the first row, in a script like the following (pseudocode):
for elements in list; do
    ./script1 elements[1] elements[2]
    ./script2 elements[1] elements[2]
done
so that script* can take the arguments from the pair and run with it.
Is there a way to do it in Bash?
I was thinking I could do something like this:
list1={`awk 'NR > 1{print $1}' file.tsv`}
list2={`awk 'NR > 1{print $2}' file.tsv`}
and then call them in the loop based on their position, but I am not sure how.
Thanks!
Shell arrays are not multi-dimensional, so an array element cannot store two arguments for your scripts. However, since you are processing lines from file.tsv, you can iterate over each line, reading both elements at once, like this:
#!/usr/bin/env sh

# Populate tab with a literal tab character
# (command substitution strips trailing newlines, but a tab survives)
tab="$(printf '\t')"

{
    # Read the first line into the dummy variable _ to skip the header
    read -r _
    # Iterate, reading tab-delimited x and y from each line
    while IFS="$tab" read -r x y || [ -n "$x" ]; do
        ./script1 "$x" "$y"
        ./script2 "$x" "$y"
    done
} < file.tsv # from this file
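To try the loop before the real scripts exist, disposable stubs can stand in for script1 and script2 (hypothetical, for illustration only):
printf '#!/bin/sh\necho "$0 got: $1 $2"\n' > script1
cp script1 script2
chmod +x script1 script2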
You could try just a while + read loop with the -a flag and IFS.
#!/usr/bin/env bash
while IFS=$' \t' read -ra line; do
    echo ./script1 "${line[0]}" "${line[1]}"
    echo ./script2 "${line[0]}" "${line[1]}"
done < <(tail -n +2 file.tsv)
Or without the tail
#!/usr/bin/env bash
skip=0 start=-1
while IFS=$' \t' read -ra line; do
    if ((start++ >= skip)); then
        echo ./script1 "${line[0]}" "${line[1]}"
        echo ./script2 "${line[0]}" "${line[1]}"
    fi
done < file.tsv
Remove the echos if you're satisfied with the output.

sed DON'T remove extra whitespace

It seems everybody else wants to remove additional whitespace; I, however, have the opposite problem.
I have a file, call it some_file.txt that looks like
a b c d
and some more
and I'm reading it line-by-line with sed,
num_lines=$(cat some_file.txt | wc -l)
for i in $(seq 1 $num_lines); do
    echo $(sed "${i}q;d" $file)
    string=$(sed "${i}q;d" $file)
    echo $string
done
I would expect the number of whitespace characters to stay the same, however the output I get is
a b c d
a b c d
and some more
and some more
So it seems that the problem is sed removing the extra whitespace between characters. Any way to fix this?
Have a look at this example:
$ echo Hello World
Hello World
$ echo "Hello World"
Hello World
sed is not your problem; your problem is that bash word-splits the unquoted output of sed before handing it to echo, which collapses the whitespace.
You just need to surround whatever echo is supposed to print with double quotation marks. So instead of
echo $(sed "${i}q;d" $file)
echo $string
You write
echo "$(sed "${i}q;d" $file)"
echo "$string"
The new script should look like this:
#!/usr/bin/env bash
file=some_file.txt
num_lines=$(cat some_file.txt | wc -l)
for i in $(seq 1 $num_lines); do
    echo "$(sed "${i}q;d" $file)"
    string=$(sed "${i}q;d" $file)
    echo "$string"
done
prints the correct output:
a b c d
a b c d
and some more
and some more
However, if you just want to go through your file line by line, I strongly recommend something like this:
while IFS= read -r line; do
    echo "$line"
done < some_file.txt
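If the line number from the seq-based version is still needed, it can be tracked with a counter instead of re-running sed for every line (a minimal sketch):
n=0
while IFS= read -r line; do
    n=$((n + 1))
    printf '%d: %s\n' "$n" "$line"
done < some_file.txt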
Question from the comments: What to do if you only want 33 lines starting from line x. One possible solution is this:
#!/usr/bin/env bash
declare -i s=$1
declare -i e=${s}+32
sed -n "${s},${e}p" "$file" | while IFS= read -r line; do
    echo "$line"
done
(Note that I would probably include some validation of $1 in there as well.)
I declare s and e as integer variables; then bash can do simple arithmetic on them and calculate the actual last line to print.
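The effect of declare -i can be seen in isolation, since assignments to integer variables are evaluated arithmetically:
declare -i s=5
declare -i e=${s}+32
echo "$s $e"   # prints: 5 37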

Loop through a comma-separated shell variable

Suppose I have a Unix shell variable as below
variable=abc,def,ghij
I want to extract all the values (abc, def and ghij) using a for loop and pass each value into a procedure.
The script should allow extracting an arbitrary number of comma-separated values from $variable.
Not messing with IFS
Not calling external command
variable=abc,def,ghij
for i in ${variable//,/ }
do
    # call your procedure/other scripts here below
    echo "$i"
done
Using bash string manipulation http://www.tldp.org/LDP/abs/html/string-manipulation.html
You can use the following script to dynamically traverse through your variable, no matter how many fields it has as long as it is only comma separated.
variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
    # call your procedure/other scripts here below
    echo "$i"
done
Instead of the echo "$i" call above, between the do and done inside the for loop, you can invoke your procedure proc "$i".
Update: The above snippet works if the value of variable does not contain spaces. If you have such a requirement, please use one of the solutions that can change IFS and then parse your variable.
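For instance, with a hypothetical proc function standing in for the real procedure:
proc() {
    echo "processing: $1"
}

variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g"); do
    proc "$i"
done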
If you set a different field separator, you can directly use a for loop:
IFS=","
for v in $variable
do
# things with "$v" ...
done
You can also store the values in an array and then loop through it as indicated in How do I split a string on a delimiter in Bash?:
IFS=, read -ra values <<< "$variable"
for v in "${values[@]}"
do
    # things with "$v"
done
Test
$ variable="abc,def,ghij"
$ IFS=","
$ for v in $variable
> do
> echo "var is $v"
> done
var is abc
var is def
var is ghij
You can find a broader approach in this solution to How to iterate through a comma-separated list and execute a command for each entry.
Examples on the second approach:
$ IFS=, read -ra vals <<< "abc,def,ghij"
$ printf "%s\n" "${vals[@]}"
abc
def
ghij
$ for v in "${vals[@]}"; do echo "$v --"; done
abc --
def --
ghij --
I think this is syntactically cleaner, and it also passes ShellCheck linting:
variable=abc,def,ghij
for i in ${variable//,/ }
do
    # call your procedure/other scripts here below
    echo "$i"
done
#!/bin/bash
TESTSTR="abc,def,ghij"
for i in $(echo $TESTSTR | tr ',' '\n')
do
    echo $i
done
I prefer to use tr instead of sed, because sed has problems with special characters like \r and \n in some cases.
Another solution is to set IFS to the desired separator.
Another solution not using IFS and still preserving the spaces:
$ var="a bc,def,ghij"
$ while read line; do echo line="$line"; done < <(echo "$var" | tr ',' '\n')
line=a bc
line=def
line=ghij
Here is an alternative tr based solution that doesn't use echo, expressed as a one-liner.
for v in $(tr ',' '\n' <<< "$var") ; do something_with "$v" ; done
It feels tidier without echo but that is just my personal preference.
The following solution:
doesn't need to mess with IFS
doesn't need helper variables (like i in a for-loop)
should be easily extensible to work for multiple separators (with a bracket expression like [:,] in the patterns)
really splits only on the specified separator(s) and not - like some other solutions presented here on e.g. spaces too.
is POSIX compatible
doesn't suffer from any subtle issues that might arise when bash’s nocasematch is on and a separator that has lower/upper case versions is used in a match like with ${parameter/pattern/string} or case
beware that:
it does however work on the variable itself and pop each element from it - if that is not desired, a helper variable is needed
it assumes var to be set and would fail if it's not and set -u is in effect
while true; do
    x="${var%%,*}"
    echo "$x"
    # x is not really needed here; one can of course directly use "${var%%,*}"
    if [ -z "${var##*,*}" ] && [ -n "${var}" ]; then
        var="${var#*,}"
    else
        break
    fi
done
Beware that separators that would be special characters in patterns (e.g. a literal *) would need to be quoted accordingly.
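The non-destructive variant mentioned above simply pops from a working copy instead (a minimal sketch, same caveats as the original):
rest="$var"
while true; do
    x="${rest%%,*}"
    echo "$x"
    if [ -z "${rest##*,*}" ] && [ -n "$rest" ]; then
        rest="${rest#*,}"
    else
        break
    fi
done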
Here's my solution, which doesn't change IFS and can take a custom regex delimiter (it does shell out to sed):
loop_custom_delimited() {
    local list=$1
    local delimiter=$2
    local item
    if [[ $delimiter != ' ' ]]; then
        # protect existing spaces with a backspace character, then turn delimiters into spaces
        list=$(echo $list | sed 's/ /'`echo -e "\010"`'/g' | sed -E "s/$delimiter/ /g")
    fi
    for item in $list; do
        # restore the protected spaces
        item=$(echo $item | sed 's/'`echo -e "\010"`'/ /g')
        echo "$item"
    done
}
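A usage example, splitting on ',' while preserving the embedded space:
$ loop_custom_delimited "a bc,def,ghij" ","
a bc
def
ghij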
Try this one.
#!/bin/bash
testpid="abc,def,ghij"
count=`echo $testpid | grep -o ',' | wc -l` # this is not a good way
count=`expr $count + 1`
while [ $count -gt 0 ] ; do
    echo $testpid | cut -d ',' -f $count   # note: this prints the fields in reverse order
    count=`expr $count - 1`
done

bash- Read an array then write from that point for some number of elements to a file

I need to search through an array; then, once I find what I'm looking for, read that
element plus a couple more of the same array and write them all to a file.
This is what I have so far
if [ -e "${EPH_DIR}" ]
then
    i=0
    while read line
    do
        FILE[$i]="$line"
        i=$(($i+1))
    done < ${EPH_DIR}
fi

for i in ${FILE[*]}
do
    echo "$i"
    if [[ $i == ${SAT} ]]
    then
        echo "Found it: $i"
    fi
done
If you want to do it in a C-style for-loop:
for ((i=0; i < ${#FILES[@]}; i++)); do
    if [[ ${FILES[i]} == $SAT ]]; then
        printf "%s\n" "found it" "${FILES[i]}" "${FILES[i+1]}" "${FILES[i+2]}"
    fi
done
Note that the array index in brackets is an arithmetic expression, so the dollar sign is not required.
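Since the question also asks to write the matched elements to a file, the printf can simply be redirected (the output path below is illustrative):
for ((i=0; i < ${#FILES[@]}; i++)); do
    if [[ ${FILES[i]} == $SAT ]]; then
        printf "%s\n" "${FILES[i]}" "${FILES[i+1]}" "${FILES[i+2]}" >> /tmp/matches.txt
    fi
done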
You can use the built in grep switch -A:
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines.
Just pipe the contents of the array out to grep and use -A to give you however many extra lines you'd like printed. For example, use -A2 to print two additional lines after any line matching $SAT:
printf -- '%s\n' "${FILE[@]}" | grep -A2 "^$SAT$"
If you want this to go out to a new file, like the question's title suggests, then just redirect this to where you want:
printf -- '%s\n' "${FILE[@]}" | grep -A2 "^$SAT$" > /path/to/my/file
Since it looks like your array is just taking a file listing from the directory $EPH_DIR, instead of using a loop, you could put the listing into the array by doing:
FILE=( "$(ls $EPH_DIR)" )
Or, if this printing is the only thing you're using the array for, you can skip the array entirely and just send the directory listing directly into grep:
ls $EPH_DIR | grep -A2 "^$SAT$"

Randomizing arg order for a bash for statement

I have a bash script that processes all of the files in a directory using a loop like
for i in *.txt
do
    ops.....
done
There are thousands of files and they are always processed in alphanumerical order because of '*.txt' expansion.
Is there a simple way to randomize the order and still ensure that I process each file exactly once?
Assuming the filenames do not have spaces, just substitute the output of List::Util::shuffle.
for i in `perl -MList::Util=shuffle -e'$,=$";print shuffle<*.txt>'`; do
....
done
If filenames do have spaces but don't have embedded newlines or backslashes, read a line at a time.
perl -MList::Util=shuffle -le'$,=$\;print shuffle<*.txt>' | while read i; do
....
done
To be completely safe in Bash, use NUL-terminated strings.
perl -MList::Util=shuffle -0 -le'$,=$\;print shuffle<*.txt>' |
while read -r -d '' i; do
....
done
Not very efficient, but it is possible to do this in pure Bash if desired. sort -R does something like this, internally.
declare -a a               # create an indexed array
for i in *.txt; do
    j=$RANDOM              # find an unused slot
    while [[ -n ${a[$j]} ]]; do
        j=$RANDOM
    done
    a[$j]=$i               # fill that slot
done
for i in "${a[@]}"; do     # iterate in index order (which is random)
    ....
done
Or use a traditional Fisher-Yates shuffle.
a=(*.txt)
for ((i=${#a[*]}; i>1; i--)); do
    j=$((RANDOM % i))
    tmp=${a[$j]}
    a[$j]=${a[$((i-1))]}
    a[$((i-1))]=$tmp
done
for i in "${a[@]}"; do
    ....
done
You could pipe your filenames through the sort command:
ls | sort --random-sort | xargs ....
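With GNU xargs (an assumption; -d is not POSIX), -d '\n' keeps the pipeline space-safe, though newlines in filenames would still break it:
ls | sort --random-sort | xargs -d '\n' -n 1 echo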
Here's an answer that relies on very basic functions within awk so it should be portable between unices.
ls -1 | awk '{print rand()*100, $0}' | sort -n | awk '{print $2}'
EDIT:
ephemient makes a good point that the above is not space-safe. Here's a version that is:
ls -1 | awk '{print rand()*100, $0}' | sort -n | sed 's/[0-9\.]* //'
If you have GNU coreutils, you can use shuf:
while read -r -d '' f
do
    # some stuff with $f
done < <(shuf -ze *)
This will work with files with spaces or newlines in their names.
Off-topic Edit:
To illustrate SiegeX's point in the comment:
$ a=42; echo "Don't Panic" | while read line; do echo $line; echo $a; a=0; echo $a; done; echo $a
Don't Panic
42
0
42
$ a=42; while read line; do echo $line; echo $a; a=0; echo $a; done < <(echo "Don't Panic"); echo $a
Don't Panic
42
0
0
The pipe causes the while to be executed in a subshell and so changes to variables in the child don't flow back to the parent.
Here's a solution with standard unix commands:
for i in $(ls); do echo $RANDOM-$i; done | sort | cut -d- -f 2-
Here's a Python solution, if it's available on your system:
import glob
import random

files = glob.glob("*.txt")
if files:
    random.shuffle(files)  # shuffles in place and returns None, so don't iterate over its result
    for file in files:
        print(file)
