Cannot recognize word if spaced by tab - bash

I have a problem where the program cannot read each word if the words in the text file are separated by a tab instead of a space.
For example, here is file.
part_Q.txt:
NWLR35MQ 649
HCDA93OW 526
abc 1
def 2
ghi 3
Note that between "abc" and "1" there is a tab, not a space.
Also note that between "NWLR35MQ" and "649" there is no tab, only spaces; the same goes for the second line.
Output:
NWLR35MQ
649
HCDA93OW
526
def
2
ghi
3
However, if I replace the tab between "abc" and "1" with a space in the file, then it outputs correctly, as below.
Expected output:
NWLR35MQ
649
HCDA93OW
526
abc
1
def
2
ghi
3
It correctly displays all words in the file. How can I display all words regardless of whether they are separated by tabs or spaces? It should display all words in both cases. It seems that the program treats the tab as an ordinary character.
Below is source code:
#!/bin/sh
tempCtr=0
realCtr=0
copyCtr=0
while IFS= read -r line || [[ -n $line ]]; do
    IFS=' '
    tempCtr=0
    for word in $line; do
        temp[$tempCtr]="$word"
        let "tempCtr++"
    done
    # if there are exactly 2 fields in each line, store ID and quantity
    if [ $tempCtr -eq 2 ]
    then
        part_Q[$realCtr]=${temp[$copyCtr]}
        let "realCtr++"
        let "copyCtr++"
        part_Q[$realCtr]=${temp[$copyCtr]}
        let "realCtr++"
        copyCtr=0
    fi
done < part_Q.txt
for value in "${part_Q[@]}"; do
    echo $value
done

What are you trying to do? If outputting is your only goal, this can be achieved very easily:
$ cat <<EOF | sed -E 's/[[:blank:]]+/\n/'
NWLR35MQ 649
HCDA93OW 526
abc 1
def 2
ghi 3
EOF
NWLR35MQ
649
HCDA93OW
526
abc
1
def
2
ghi
3
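Note that using \n in the replacement text of a sed s/// command is a GNU sed extension. A more portable sketch uses tr to squeeze every run of spaces and tabs into a single newline (tr interprets the \t and \n escapes itself, so no $'...' quoting is needed); unlike your script, though, this does not enforce the two-fields-per-line check:
tr -s ' \t' '\n' < part_Q.txt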

Awk is faster than a loop, but here is how you can implement this with a loop:
realCtr=0
while read -r x1 x2 x3; do
    if [ -n "${x2}" ] && [ -z "${x3}" ]; then
        part_Q[realCtr]="${x1}"
        (( realCtr++ ))
        part_Q[realCtr]="${x2}"
        (( realCtr++ ))
    fi
done < part_Q.txt
echo "Array (2 items each line):"
echo "${part_Q[@]}" | sed 's/[^ ]* [^ ]* /&\n/g'

You might solve this (as in your example) with a single line of code:
cat part_Q.txt | tr $'\t' $'\n' | tr -s ' ' $'\n'
which first translates each tab into a newline, and then
translates (runs of) spaces as well.
Note: the $'...' quoting is needed so that bash expands \t and \n into the actual tab and newline characters before tr sees them.
Since it has been mentioned, awk can help, too:
awk 'NF==2{print $1"\n"$2}' part_Q.txt
where the NF==2 condition also takes care of only using lines with exactly 2 fields.

Changing IFS=' ' to IFS=$'\t ' solved the problem.
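For reference, here is a minimal sketch of the read loop with only that change applied (the two-field bookkeeping is omitted for brevity):
#!/bin/bash
while IFS= read -r line || [[ -n $line ]]; do
    IFS=$'\t '          # split the line on tabs as well as spaces
    for word in $line; do
        echo "$word"
    done
done < part_Q.txt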

Related

Read lines from a file and output with specific formatting with Bash

In A.csv, there are
1
2
3
4
How should I read this file and create variables $B and $C so that:
echo $B
echo $C
returns:
1 2 3 4
1,2,3,4
So far I am trying:
cat A.csv | while read A;
do
echo $A
done
It only returns
1
2
3
4
Assuming bash 4.x, the following is efficient, robust, and native:
# Read each line of A.csv into a separate element of the array lines
readarray -t lines <A.csv
# Generate a string C with a comma after each item in the array
printf -v C '%s,' "${lines[@]}"
# Prune the trailing comma from that string
C=${C%,}
# Generate a string B with a space after each item in the array
printf -v B '%s ' "${lines[@]}"
# Prune the trailing space
B=${B% }
As @Cyrus said
B=$(cat A.csv)
echo $B
Will output:
1 2 3 4
Because bash word-splits the unquoted variable, so echo joins the lines with single spaces rather than preserving the newlines. This is dangerous if A.csv contains any characters which might be affected by bash glob expansion, but should be fine if you are just reading simple strings.
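A quick way to see that globbing hazard (a hypothetical demo, run in a non-empty directory):
printf '%s\n' '*' > A.csv    # a file whose only content is a literal *
B=$(cat A.csv)
echo $B      # unquoted: the * is glob-expanded to every filename in the directory
echo "$B"    # quoted: prints the literal *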
If you are reading simple strings with no spaces in any of the elements, you can also get your desired result for $C by using:
echo $B | tr ' ' ','
This will output:
1,2,3,4
If lines in A.csv may contain bash special characters or spaces then we return to the loop.
For why I've formatted the file reading loop as I have, refer to: Looping through the content of a file in Bash?
B=''
C=''
while read -r -u 7 curr_line; do
    if [ "$B$C" == "" ]; then
        B="$curr_line"
        C="$curr_line"
    else
        B="$B $curr_line"
        C="$C,$curr_line"
    fi
done 7<A.csv
echo "$B"
echo "$C"
Will construct the two variables as you desire using a loop through the file contents and should prevent against unwanted globbing and splitting.
B=$(cat A.csv)
echo $B
Output:
1 2 3 4
With quotes:
echo "$B"
Output:
1
2
3
4
I would read the file into a bash array:
mapfile -t array < A.csv
Then, with various join characters
b="${array[*]}" # space is the default
echo "$b"
c=$( IFS=","; echo "${array[*]}" )
echo "$c"
Or, you can use paste to join all the lines with a specified separator:
b=$( paste -d" " -s < A.csv )
c=$( paste -d"," -s < A.csv )
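For the four-line A.csv from the question, those two commands produce exactly the desired strings:
$ paste -d" " -s < A.csv
1 2 3 4
$ paste -d"," -s < A.csv
1,2,3,4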
Try this:
cat A.csv | while read -r A;
do
    printf '%s ' "$A"   # use a %s format: never expand data as the format string
done
Regards!
Try this (simpler one):
b=$(tr '\n' ' ' < file)
c=$(tr '\n' ',' < file)
You don't have to read the file line by line for that. If the file came from Windows, make sure you run the dos2unix command on it first (to remove the \r characters).
Note: dos2unix modifies the file in place, so keep a copy of the original.
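If you'd rather not touch the original file at all, a non-destructive sketch strips the \r characters on the fly instead:
b=$(tr -d '\r' < file | tr '\n' ' ')
c=$(tr -d '\r' < file | tr '\n' ',')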

Output a file in two columns in BASH

I'd like to rearrange a file in two columns after the nth line.
For example, say I have a file like this here:
This is a bunch
of text
that I'd like to print
as two
columns starting
at line number 7
and separated by four spaces.
Here are some
more lines so I can
demonstrate
what I'm talking about.
And I'd like to print it out like this:
This is a bunch           and separated by four spaces.
of text                   Here are some
that I'd like to print    more lines so I can
as two                    demonstrate
columns starting          what I'm talking about.
at line number 7
How could I do that with a bash command or function?
Actually, pr can do almost exactly this:
pr --output-tabs=' 1' -2 -t tmp1
↓
This is a bunch           and separated by four spaces.
of text                   Here are some
that I'd like to print    more lines so I can
as two                    demonstrate
columns starting          what I'm talking about.
at line number 7
-2 for two columns; -t to omit page headers; and without the --output-tabs=' 1', it'll insert a tab for every 8 spaces it added. You can also set the page width and length (if your actual files are much longer than 100 lines); check out man pr for some options.
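For example, a 500-line file could be laid out on one page tall enough to hold both columns (a sketch; -l sets lines per page, -w the total width, and longfile.txt is a stand-in name):
pr -2 -t -l 250 -w 80 --output-tabs=' 1' longfile.txt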
If you're fixed upon "four spaces more than the longest line on the left," then you might need something a bit more complex.
The following works with your test input, but is getting to the point where the correct answer would be "just use Perl, already":
#!/bin/sh
infile=${1:-tmp1}
longest=$(longest=0;
          head -n $(( $( wc -l $infile | cut -d ' ' -f 1 ) / 2 )) $infile | \
          while read line
          do
              current="$( echo $line | wc -c | cut -d ' ' -f 1 )"
              if [ $current -gt $longest ]
              then
                  echo $current
                  longest=$current
              fi
          done | tail -n 1 )
pr -t -2 -w$(( $longest * 2 + 6 )) --output-tabs=' 1' $infile
↓
This is a bunch           and separated by four spa
of text                   Here are some
that I'd like to print    more lines so I can
as two                    demonstrate
columns starting          what I'm talking about.
at line number 7
… re-reading your question, I wonder if you meant that you were going to literally specify the nth line to the program, in which case, neither of the above will work unless that line happens to be halfway down.
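If the split line really is passed in explicitly, one sketch is to split the file yourself and let paste and expand do the layout (this assumes no left-hand line is longer than the chosen tab stop of 30):
n=7                                                # first line of the right-hand column
paste <(head -n $((n-1)) file) <(tail -n +$n file) | expand -t 30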
Thank you chatraed and BRPocock (and your colleague). Your answers helped me think up this solution, which answers my need.
function make_cols
{
    file=$1        # input file
    line=$2        # line to break at
    pad=$(($3-1))  # spaces between cols - 1
    len=$( wc -l < $file )
    max=$(( $( wc -L < <(head -$(( line - 1 )) $file ) ) + $pad ))
    SAVEIFS=$IFS; IFS=$(echo -en "\n\b")
    paste -d" " <( for l in $( cat <(head -$(( line - 1 )) $file ) )
                   do
                       printf "%-""$max""s\n" $l
                   done ) \
                <(tail -$(( len - line + 1 )) $file )
    IFS=$SAVEIFS
}
make_cols tmp1 7 4
Could be optimized in many ways, but does its job as requested.
Input data (configurable):
file
num of rows borrowed from file for the first column
num of spaces between columns
format.sh:
#!/bin/bash
file=$1
if [[ ! -f $file ]]; then
    echo "File not found!"
    exit 1
fi
spaces_col1_col2=4
rows_col1=6
rows_col2=$(($(cat $file | wc -l) - $rows_col1))
IFS=$'\n'
ar1=($(head -$rows_col1 $file))
ar2=($(tail -$rows_col2 $file))
maxlen_col1=0
for i in "${ar1[@]}"; do
    if [[ $maxlen_col1 -lt ${#i} ]]; then
        maxlen_col1=${#i}
    fi
done
maxlen_col1=$(($maxlen_col1+$spaces_col1_col2))
if [[ $rows_col1 -lt $rows_col2 ]]; then
    rows=$rows_col2
else
    rows=$rows_col1
fi
ar=()
for i in $(seq 0 $(($rows-1))); do
    line=$(printf "%-${maxlen_col1}s\n" ${ar1[$i]})
    line="$line${ar2[$i]}"
    ar+=("$line")
done
printf '%s\n' "${ar[@]}"
Output:
$ bash format.sh myfile
This is a bunch           and separated by four spaces.
of text                   Here are some
that I'd like to print    more lines so I can
as two                    demonstrate
columns starting          what I'm talking about.
at line number 7

Cut column by column name in bash

I want to specify a column by name (e.g. 102), find the position of this column, and then use something like cut -f1-5,7- with the found position to delete the specified column.
This is my file header (delim = "\t"):
#CHROM POS 1 100 101 102 103 107 108
This awk should work:
awk -F'\t' -v c="102" 'NR==1{for (i=1; i<=NF; i++) if ($i==c){p=i; break}; next} {print $p}' file
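To actually delete that column instead of printing it, you can hand the discovered position to cut; a sketch (--complement is a GNU extension, and the portable fallback assumes the column is not the first field):
pos=$(awk -F'\t' 'NR==1{for (i=1; i<=NF; i++) if ($i=="102") {print i; exit}}' file)
cut --complement -f"$pos" file        # GNU cut
cut -f1-$((pos-1)),$((pos+1))- file   # portable, if pos > 1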
Here's one possible solution without the restriction that only one column is to be removed. It is written as a bash function, where the first argument is the filename, and the remaining arguments are the columns to exclude.
rmcol() {
    local file=$1
    shift
    cut -f$(head -n1 "$file" | tr \\t \\n | grep -vFxn "${@/#/-e}" |
            cut -d: -f1 | paste -sd,) "$file"
}
If you want to select rather than exclude the named columns, then change -vFxn to -Fxn.
That almost certainly requires some sort of explanation. The first two lines of the function just remove the filename from the arguments and store it for later use. The cut command will then select the appropriate columns; the column numbers are computed with the complicated pipeline which follows:
head -n1 "$file" |  # Take the first line of the file
tr \\t \\n |        # Change all the tabs to newlines
grep                # Select all lines (i.e. column names) which
  -v                # don't match
  F                 # the literal string
  x                 # which is the complete line
  n                 # and include the line number in the output
  "${@/#/-e}" |     # Put -e at the beginning of each command line argument,
                    # converting the arguments into grep pattern arguments (-e)
cut -d: -f1 |       # Select only the line number from those matches
paste -sd,          # Paste together all the line numbers, separated with commas.
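Hypothetical usage, given a file with the header from the question:
rmcol data.tsv 102        # print every column except "102"
rmcol data.tsv 102 107    # several columns can be excluded at once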
Using a for loop in bash:
C=1; for i in $(head file -n 1) ; do if [ $i == "102" ] ; then break ; else C=$(( $C + 1 )) ; fi ; done ; echo $C
And a full script
C=1
for i in $(head in_file -n 1) ; do
    echo $i
    if [ $i == "102" ] ; then
        break ;
    else
        echo $C
        C=$(( $C + 1 ))
    fi
done
cut -f1-$(($C-1)),$(($C+1))- in_file
trying a solution without looping through columns, I get:
#!/bin/bash
pick="$1"
titles="pos 1 100 102 105"
tmp=" $titles "
tmp="${tmp%% $pick* }"
tmp=($tmp)
echo "column ${#tmp[#]}"
It suffers from incorrectly reporting last column if column name can't be found.
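One possible guard for that case (a sketch, assuming column names never contain spaces) is to test for membership before the suffix removal:
if [[ " $titles " != *" $pick "* ]]; then
    echo "column $pick not found" >&2
    exit 1
fi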
Try this small awk utility to cut specific headers - https://github.com/rohitprajapati/toyeca-cutter
Example usage -
awk -f toyeca-cutter.awk -v c="col1, col2, col3, col4" my_file.csv

Correctly count number of lines in a bash variable

I need to count the number of lines in a given variable. For example, I need to find how many lines VAR has, where VAR=$(git log -n 10 --format="%s").
I tried echo "$VAR" | wc -l, which indeed works, but if VAR is empty, it prints 1, which is wrong. Is there a workaround for this? Something better than using an if clause to check whether the variable is empty... (maybe add a line and subtract 1 from the returned value?).
wc -l counts the number of newline characters, so a final line without a trailing newline is not counted. You can use grep -c '^' for counting lines instead.
You can see the difference with:
#!/bin/bash
count_it() {
    echo "Variable contains $2: ==>$1<=="
    echo -n 'grep:'; echo -n "$1" | grep -c '^'
    echo -n 'wc :'; echo -n "$1" | wc -l
    echo
}
VAR=''
count_it "$VAR" "empty variable"
VAR='one line'
count_it "$VAR" "one line without \n at the end"
VAR='line1
'
count_it "$VAR" "one line with \n at the end"
VAR='line1
line2'
count_it "$VAR" "two lines without \n at the end"
VAR='line1
line2
'
count_it "$VAR" "two lines with \n at the end"
which produces:
Variable contains empty variable: ==><==
grep:0
wc : 0
Variable contains one line without \n at the end: ==>one line<==
grep:1
wc : 0
Variable contains one line with \n at the end: ==>line1
<==
grep:1
wc : 1
Variable contains two lines without \n at the end: ==>line1
line2<==
grep:2
wc : 1
Variable contains two lines with \n at the end: ==>line1
line2
<==
grep:2
wc : 2
You can always write it conditionally:
[ -n "$VAR" ] && echo "$VAR" | wc -l || echo 0
This will check whether $VAR has contents and act accordingly.
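Wrapped as a small reusable helper (a sketch; count_lines is a hypothetical name):
count_lines() {
    [ -n "$1" ] && printf '%s\n' "$1" | wc -l || echo 0
}
VAR=$(git log -n 10 --format="%s")
count_lines "$VAR"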
For a pure bash solution: instead of putting the output of the git command into a variable (which, arguably, is ugly), put it in an array, one line per field:
mapfile -t ary < <(git log -n 10 --format="%s")
Then you only need to count the number of fields in the array ary:
echo "${#ary[#]}"
This design will also make your life simpler if, e.g., you need to retrieve the 5th commit message:
echo "${ary[4]}"
try:
echo "$VAR" | grep ^ | wc -l

How to delete all lines containing more than three characters in the second column of a CSV file?

How can I delete all of the lines in a CSV file which contain more than 3 characters in the second column? E.g.:
cave,ape,1
tree,monkey,2
The second line contains more than 3 characters in the second column, so it will be deleted.
awk -F, 'length($2)<=3' input.txt
You can use this command:
grep -vE "^[^,]+,[^,]{4,}," test.csv > filtered.csv
Breakdown of the grep syntax:
-v = remove lines matching
-E = extended regular expression syntax (also -P is perl syntax)
bash stuff:
> filename = overwrite/create a file and fill it with the standard out
Breakdown of the regex syntax:
"^[^,]+,[^,]{4,},"
^ = beginning of line
[^,] = anything except commas
[^,]+ = 1 or more of anything except commas
, = comma
[^,]{4,} = 4 or more of anything except commas
And please note that the above is simplified and would not work if the first 2 columns contained commas in the data. (it does not know the difference between escaped commas and raw ones)
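If quoted fields with embedded commas are possible, a CSV-aware field split is safer. A sketch using GNU awk's FPAT feature (the surrounding quotes are stripped before measuring the field's length):
gawk -v FPAT='([^,]*)|("[^"]*")' '{
    f = $2
    gsub(/^"|"$/, "", f)          # strip surrounding quotes, if any
    if (length(f) <= 3) print
}' input.csv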
No one has supplied a sed answer yet, so here it is:
sed -e '/^[^,]*,[^,]\{4\}/d' animal.csv
And here's some test data.
>animal.csv cat <<'.'
cave,ape,0
,cat,1
,orangutan,2
large,wolf,3
,dog,4,happy
tree,monkey,5,sad
.
And now to test:
sed -i'' -e '/^[^,]*,[^,]\{4\}/d' animal.csv
cat animal.csv
Only ape, cat and dog should appear in the output.
This is a filter script for your type of data. It assumes your data is in UTF-8:
#!/bin/bash
function px {
    local a="$@"
    local i=0
    while [ $i -lt ${#a} ]
    do
        printf \\x${a:$i:2}
        i=$(($i+2))
    done
}
(iconv -f UTF8 -t UTF16 | od -x | cut -b 9- | xargs -n 1) |
if read utf16header
then
    px $utf16header
    cnt=0
    out=''
    st=0
    while read line
    do
        if [ "$st" -eq 1 ] ; then
            cnt=$(($cnt+1))
        fi
        if [ "$line" == "002c" ] ; then
            st=$(($st+1))
        fi
        if [ "$line" == "000a" ]
        then
            out=$out$line
            if [[ $cnt -le 3+1 ]] ; then
                px $out
            fi
            cnt=0
            out=''
            st=0
        else
            out=$out$line
        fi
    done
fi | iconv -f UTF16 -t UTF8
