Bash: split by comma with special characters

I have a list that is comma delimited like so...
00:00:00:00:00:00,Bob's Laptop,11111111111111111
00:00:00:00:00:00,Mom & Dad's Computer,22222222222222222
00:00:00:00:00:00,Kitchen,33333333333333333
I'm trying to loop over these lines and populate variables with the 3 columns in each row. My script works when the data has no spaces, ampersands, or apostrophes. When it does, the fields don't come out right. Here is my script:
for line in $(cat list)
do
arr=(`echo $line | tr "," "\n"`)
echo "Field1: ${arr[0]}"
echo "Field2: ${arr[1]}"
echo "Field3: ${arr[2]}"
done
If one of you bash gurus can point out how I can get this script to work with my list I would greatly appreciate it!
EV

while IFS=, read -r field1 field2 field3
do
echo "$field1"
echo "$field2"
echo "$field3"
done < list
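With the sample list above, each record should come out as three intact fields; for example, the first line prints:
00:00:00:00:00:00
Bob's Laptop
11111111111111111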

Can you use awk?
awk -F',' '{print "Field1: " $1 "\nField2: " $2 "\nField3: " $3}'
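For example, assuming the data is in the file list from the question:
awk -F',' '{print "Field1: " $1 "\nField2: " $2 "\nField3: " $3}' list
which prints, for the first record:
Field1: 00:00:00:00:00:00
Field2: Bob's Laptop
Field3: 11111111111111111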

Do not read lines with a for loop. Use read instead
while IFS=, read -r -a line;
do
printf "%s\n" "${line[0]}" "${line[1]}" "${line[2]}";
done < list
Or, using array slicing
while IFS=, read -r -a line;
do
printf "%s\n" "${line[#]:0:3}";
done < list

program that prints input in bash

I am new to bash and I am struggling with a program. I want to write a program that first asks for user input and afterwards prints the words with a \n (newline) between them. The last echo contains the number of characters written. Also, the output may only contain the words and no digits. E.g.:
Input: hallo1 user2 Pete4
Output: hallo
user
Pete
13 Characters
This is my code for the time being.
echo Type one or multiple words:
read varname
arr=( "${arr[@]}" "$varname" )
for i in "${arr[@]}"; do
echo "$i"
done
echo ${arr[@]}
# printf '%s\n' "${arr[@]}"
Try this. It works for me. I added the digit removal inside the for loop.
After the for, I first remove the spaces between the names and then count the total number of characters using ${#aux}. I also added the -n parameter to the first echo, so the line break comes from the second one.
echo Type one or multiple words:
read varname
arr=( "${arr[#]}" "$varname" )
for i in "${arr[#]//[[:digit:]]/}"; do
echo -n "$i"
done
aux=$(echo "${i}" | sed "s/ //g")
echo " " ${#aux} " Characters"
An approach in plain bash without using an array:
#!/bin/bash
echo 'Type one or multiple words on a line:'
read -r
words_without_digits=${REPLY//[0-9]}
line_without_blanks=${words_without_digits//[[:blank:]]}
printf '%s\n' $words_without_digits
echo "${#line_without_blanks} Characters"
echo Type one or multiple words:
read varname
arr=( "${arr[#]}" "$varname" )
for i in "${arr[#]//[[:digit:]]/}"; do
printf '%s\n' $i
done
aux=$(echo "${i}" | sed "s/ //g")
echo ${#aux} " Characters"

How can I add quotes around each word stored in a variable in shell script

I have a variable foo.
echo "print foo" "$foo" ---> abc,bc,cde
I wanted to put quotes around each variable.
Expected result = 'abc','bc','cde'.
I have tried this way, but its not working:
join_lines() {
local IFS=${1:-,}
set --
while IFS= read -r line; do set -- "$#" "$'line'"; done
echo "$*"
}
Could you please try the following, written and tested with the shown samples in GNU awk.
Without loop:
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=",";OFS="\047,\047"} {$1=$1;$0=s1 $0 s1} 1'
With a loop, the usual way to go through all (comma-separated) fields:
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=OFS=","} {for(i=1;i<=NF;i++){$i=s1 $i s1}} 1'
Output will be 'abc','bc','cde'.
As an alternative, using sed: replace every , with ',' and add a ' at the beginning and end of the line to wrap the first/last tokens.
sed -e "s/^/'/" -e "s/$/'/" -e "s/,/','/g"
On the surface, the question is how to convert a comma-separated list of values (stored in a shell variable) into a comma-separated list of quoted tokens. Extending the logic provided by the OP, but using shell arrays:
foo="abc,bc,cde"
IFS=, read -a items <<< "$foo"
result=
for r in "${items[#]}" ; do
[ "$result" ] && result+=","
result+="'$r'"
done
echo "RESULT=$result"
If needed, the logic can be placed into a function/filter:
function join_lines {
local -a items
local input result
while IFS=, read -a items ; do
result=
for r in "${items[#]}" ; do
[ "$result" ] && result+=","
result+="'$r'"
done
echo "$result"
done
}
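Used as a filter it produces the same result, e.g.:
echo "abc,bc,cde" | join_lines
'abc','bc','cde'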

Split a string in bash based on delimiter

I have a file log_file which has contents such as
CCO O-MR1 Sync:No:3:No:346:Yes
CCO P Sync:No:1:No:106:Yes
CCO P Checkout:Yes:1:No:10:No
CCO O-MR1 Checkout(2.2):Yes:1:No:10:No
I am trying to obtain the 4 fields based on the ":" delimiter.
The script that I have is
#!/bin/bash
log_file=$1
for i in `cat $log_file` ; do
echo $i
field_a=`echo $i | awk -F '[:]' '{print $1}'`
echo $field_a
field_b=`echo $i | awk -F '[:]' '{print $2}'`
echo $field_b
...
done
but the value that this code gives for field_a is wrong; it splits the line on the " " delimiter instead.
echo $i also prints the wrong value.
What else can I use to correct this?
This is covered in detail in BashFAQ #1. To summarize, use a while read loop with IFS set to contain (only) the characters that should be used to split fields.
while IFS=: read -r field_a field_b other_fields; do
echo "field_a is $field_a"
echo "field_b is $field_b"
echo "Remaining fields are $other_fields"
done <"$log_file"

Reverse the words but keep the order Bash

I have a file with lines. I want to reverse the words, but keep them in the same order.
For example: "Test this word"
Result: "tseT siht drow"
I'm using a Mac, so awk doesn't seem to work.
What I've got for now:
input=FILE_PATH
while IFS= read -r line || [[ -n $line ]]
do
echo $line | rev
done < "$input"
Here is a solution that completely avoids awk
#!/bin/bash
input=./data
while read -r line ; do
for word in $line ; do
output=`echo $word | rev`
printf "%s " $output
done
printf "\n"
done < "$input"
In case xargs works on mac:
echo "Test this word" | xargs -n 1 | rev | xargs
Inside your read loop, you can just iterate over the words of your string and pass them to rev
line="Test this word"
for word in "$line"; do
echo -n " $word" | rev
done
echo # Add final newline
output
tseT siht drow
You are actually in fairly good shape with bash. You can use string indexes, string length, and C-style for loops to loop over the characters in each word, building a reversed string to output. You can control formatting in a number of ways to handle spaces between words, but a simple first=1 flag is about as easy as anything else. You can do the following with your read:
#!/bin/bash
while read -r line || [[ -n $line ]]; do ## read line
first=1 ## flag to control space
a=( $( echo $line ) ) ## put line in array
for i in "${a[#]}"; do ## for each word
tmp= ## clear temp
len=${#i} ## get length
for ((j = 0; j < len; j++)); do ## loop length times
tmp="${tmp}${i:$((len-j-1)):1}" ## add char len - j to tmp
done
if [ "$first" -eq '1' ]; then ## if first word
printf "$tmp"; first=0; ## output w/o space
else
printf " $tmp" ## output w/space
fi
done
echo "" ## output newline
done
Example Input
$ cat dat/lines2rev.txt
my dog has fleas
the cat has none
Example Use/Output
$ bash revlines.sh <dat/lines2rev.txt
ym god sah saelf
eht tac sah enon
Look things over and let me know if you have questions.
Using rev and awk
Consider this as the sample input file:
$ cat file
Test this word
Keep the order
Try:
$ rev <file | awk '{for (i=NF; i>=2; i--) printf "%s%s",$i,OFS; print $1}'
tseT siht drow
peeK eht redro
(This uses awk but, because it uses no advanced awk features, it should work on MacOS.)
Using in a script
If you need to put the above in a script, then create a file like:
$ cat script
#!/bin/bash
input="/Users/Anastasiia/Desktop/Tasks/test.txt"
rev <"$input" | awk '{for (i=NF; i>=2; i--) printf "%s%s",$i,OFS; print $1}'
And, run the file:
$ bash script
tseT siht drow
peeK eht redro
Using bash
while read -a arr
do
x=" "
for ((i=0; i<${#arr[@]}; i++))
do
((i == ${#arr[@]}-1)) && x=$'\n'
printf "%s%s" $(rev <<<"${arr[i]}") "$x"
done
done <file
Applying the above to our same test file:
$ while read -a arr; do x=" "; for ((i=0; i<${#arr[@]}; i++)); do ((i == ${#arr[@]}-1)) && x=$'\n'; printf "%s%s" $(rev <<<"${arr[i]}") "$x"; done; done <file
tseT siht drow
peeK eht redro

How to concatenate all lines from a file in Bash? [duplicate]

This question already has answers here:
How to concatenate multiple lines of output to one line?
(12 answers)
Closed 4 years ago.
I have a file csv :
data1,data2,data2
data3,data4,data5
data6,data7,data8
I want to convert it to (contained in a variable):
variable=data1,data2,data2%0D%0Adata3,data4,data5%0D%0Adata6,data7,data8
My attempt :
data=''
cat csv | while read line
do
data="${data}%0D%0A${line}"
done
echo $data # Fails: data remains empty (the pipe runs the loop in a subshell, so the changes are lost)
Please help.
Simpler to just strip newlines from the file:
tr -d '\n' < yourfile.txt > concatfile.txt
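Or, to capture the result straight into a variable instead of a second file (a small variation on the above, not part of the original answer):
data=$(tr -d '\n' < yourfile.txt)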
In bash,
data=$(
while read line
do
echo -n "%0D%0A${line}"
done < csv)
In non-bash shells, you can use `...` instead of $(...). Also, echo -n, which suppresses the newline, is unfortunately not completely portable, but again this will work in bash.
Some of these answers are incredibly complicated. How about this.
data="$(xargs printf ',%s' < csv | cut -b 2-)"
or
data="$(tr '\n' ',' < csv | cut -b 2-)"
Too "external utility" for you?
IFS=$'\n', read -d'\0' -a data < csv
Now you have an array! Output it however you like, perhaps with
data="$(tr ' ' , <<<"${data[#]}")"
Still too "external utility?" Well fine,
data="$(printf "${data[0]}" ; printf ',%s' "${data[#]:1:${#data}}")"
Yes, printf can be a builtin. If it isn't but your echo is and it supports -n, use echo -n instead:
data="$(echo -n "${data[0]}" ; for d in "${data[#]:1:${#data[#]}}" ; do echo -n ,"$d" ; done)"
Okay, now I admit that I am getting a bit silly. Andrew's answer is perfectly correct.
I would much prefer a loop:
for line in $(cat file.txt); do echo -n $line; done
Note: This solution requires the input file to have a new line at the end of the file or it will drop the last line.
Another short bash solution
variable=$(
RS=""
while read line; do
printf "%s%s" "$RS" "$line"
RS='%0D%0A'
done < filename
)
awk 'END { print r }
{ r = r ? r OFS $0 : $0 }
' OFS='%0D%0A' infile
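With the sample data this produces the desired string; to capture it (assuming the data is in the file csv from the question, and using the variable name from the question):
variable=$(awk 'END { print r } { r = r ? r OFS $0 : $0 }' OFS='%0D%0A' csv)
echo "$variable"
data1,data2,data2%0D%0Adata3,data4,data5%0D%0Adata6,data7,data8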
With shell:
data=
while IFS= read -r; do
[ -n "$data" ] &&
data=$data%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
Recent bash versions:
data=
while IFS= read -r; do
[[ -n $data ]] &&
data+=%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
A very simple single-line solution that requires no extra files and is quite easy to understand (just cat the file together and perform a sed replace):
output=$(echo $(cat ./myFile.txt) | sed 's/ /%0D%0A/g')
Useless use of cat, punished! You want to feed the CSV into the loop
while read line; do
# ...
done < csv
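Along those lines, a minimal corrected version of the original attempt (a sketch, with only the redirection and separator handling changed):
data=''
while IFS= read -r line
do
data="${data:+${data}%0D%0A}${line}"   # add the separator only between lines
done < csv
echo "$data"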
