bash: separate blocks of lines between pattern x and y

My question is similar to Sed/Awk - pull lines between pattern x and y; however, in my case I want to output each block of lines to an individual file (named after the first pattern).
Input example:
-- filename: query1.sql
-- sql comments goes here or else where
select * from table1
where id=123;
-- eof
-- filename: query2.sql
insert into table1
(id, date) values (1, sysdate);
-- eof
I want the bash script to generate 2 files: query1.sql and query2.sql with the following content:
query1.sql:
-- sql comments goes here or else where
select * from table1
where id=123;
query2.sql:
insert into table1
(id, date) values (1, sysdate);
Thank you

awk '/-- filename/{if(f)close(f); f=$3;next} !/eof/&&/./{print $0 >> f}' input
Brief explanation:
/-- filename/{if(f)close(f); f=$3;next}: locate a record that contains the filename, close the previously opened file (if any), and assign the new filename to f.
!/eof/&&/./{print $0 >> f}: for following lines that neither contain 'eof' nor are empty, append them to the file named in f.
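For example, with the sample above saved as 'input', a run and the resulting files might look like this (an illustrative session; note that >> appends, so remove any existing query*.sql files before re-running):
$ awk '/-- filename/{if(f)close(f); f=$3;next} !/eof/&&/./{print $0 >> f}' input
$ cat query1.sql
-- sql comments goes here or else where
select * from table1
where id=123;
$ cat query2.sql
insert into table1
(id, date) values (1, sysdate);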

This might work for you (GNU sed):
sed -r '/-- filename: (\S+)/!d;s##/&/,/-- eof/{//d;w \1#p;s/.*/}/p;d' file |
sed -nf - file
Create a sed script from the input file and run it against the input file
N.B. Two lines are needed for each query as the program for the query must be surrounded by braces and the w command must end in a newline.
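For the sample input, the script generated by the first sed (which the second sed then executes via -nf - against the same file) should look roughly like this, which illustrates why each query needs two lines:
/-- filename: query1.sql/,/-- eof/{//d;w query1.sql
}
/-- filename: query2.sql/,/-- eof/{//d;w query2.sql
}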

Using GNU awk to handle multiple open files for you:
awk '/^-- eof/{f=0} f{print > out} /^-- filename/{out=$3; f=1}' file
or with any awk:
awk '/^-- eof/{f=0} f{print > out} /^-- filename/{close(out); out=$3; f=1}' file

Related

Regex for printing pattern from string

I have a file with the content below. I need to separate the content into 2 files:
o/p1 should contain everything within the first parentheses (), with the backticks (`) removed and only columns 1 and 2 printed.
o/p2 should contain LOCATION along with its value.
$ cat dt.txt
CREATE EXTERNAL TABLE `rte.fteff_ft`(
`dt` date,
`wk_id` int,
`yq_id` int(10,00),
`te_ind` string,
`yw_dt` date,
`em_dt` date comment dfdsf sdfsdf)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\u0007'
LINES TERMINATED BY '\n'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://dfdf/data/ffff/ODE/TdddfT/'
TBLPROPERTIES (
'last_modified_by'='asdas',
'last_modified_time'='1639551681',
'numFiles'='1',
'totalSize'='2848434',
'transient_lastDdlTime'='1639551681')
I need output from the above in two files.
o/p1: a.txt
dt date,
wk_id int,
yq_id int(10,00),
te_ind string,
yw_dt date,
em_dt date
o/p2: b.txt
LOCATION
'hdfs://dfdf/data/ffff/ODE/TdddfT/'
First, use sed to run a couple of commands, operating on the range of lines between 'CREATE EXTERNAL' and 'ROW FORMAT DELIMITED' (where they occur at the start of a line), but not including those lines themselves. Then replace the grave accent marks (backticks) with nothing, and keep only the first 2 words of each line.
sed -E '/CREATE EXTERNAL/,/ROW FORMAT DELIMITED/!d;//d;s/`//g; s/(([^ ]+ ){2}).*/\1/' dt.txt > a.txt
EDIT: To remove the commas at the end of the line, add another command, s/,$//. Make sure to anchor the comma to the end of the line, or else you'll also remove the comma inside the int(10,00) declaration.
sed -E '/CREATE EXTERNAL/,/ROW FORMAT DELIMITED/!d;//d;s/`//g;s/,$//; s/(([^ ]+ ){2}).*/\1/' dt.txt > a.txt
Second, use the -A option of grep to match the word 'LOCATION' on a line by itself, plus the one line that follows it.
grep -A 1 '^LOCATION$' dt.txt > b.txt
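For comparison, a single-pass awk sketch (an illustration, not part of the original answer; the a.txt and b.txt names come from the question, and trailing commas are stripped as in the edited sed above):
awk '/^CREATE EXTERNAL/ {flag=1; next}                      # start of the column list
     /^ROW FORMAT DELIMITED/ {flag=0}                       # end of the column list
     flag {gsub(/`/,""); sub(/,$/,""); print $1, $2 > "a.txt"}
     /^LOCATION$/ {loc=1}                                   # print LOCATION plus the next line
     loc {print > "b.txt"; if (!/^LOCATION$/) loc=0}' dt.txt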

Reading multiple lines using read line do

OK, I'm an absolute noob at this (I only started trying to code a few weeks ago for my job), so please go easy on me.
I'm on an AIX system.
I have file1, file2 and file3 and they all contain 1 column of data (text or numerical).
file1
VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY
VBDSBQ_KFGP_SAPECC_PRGX_ADDRM_WKLY
VBDSBQ_KFGP_SAPECC_PRGX_COND_WKLY
VBDSBQ_KFGP_SAPECC_PRGX_CUSTM_WKLY
VBDSBQ_KFGP_SAPECC_PRGX_EPOS_DLY
VBDSBQ_KFGP_SAPECC_PRGX_INVV_WKLY
file2
MCMILS03
HGAHJK05
KARNEK93
MORROT32
LAWFOK12
LEMORK82
file3
8970597895
0923875
89760684
37960473
526238495
146407
There will be the exact same number of lines in each of these files.
I have another file called "dummy_file", which is what I want to pull in, replace parts of, and write out to a new file.
WORKSTATION#JOB_NAME
SCRIPTNAME "^TWSSCRIPTS^SCRIPT"
STREAMLOGON "^TWSUSER^"
-job JOB_NAME -user USER_ID -i JOB_ID
RECOVERY STOP
There are only 3 strings I care about in this file that I want replaced, and they will always be the same for the dummy files I use in future:
JOB_NAME
JOB_ID
USER_ID
There are 2 entries for JOB_NAME and only 1 for each of the others. What I want is to take the raw file, replace both JOB_NAME entries with line 1 from file1, replace USER_ID with line 1 from file2, replace JOB_ID with line 1 from file3, and then write the result into a new file.
I want to repeat the process for all the lines in file1, file2 and file3, so the next block will have its entries replaced by line 2 from the 3 files, the one after that by line 3, and so on.
The raw file and the expected output are below:
WORKSTATION#JOB_NAME
SCRIPTNAME "^TWSSCRIPTS^SCRIPT"
STREAMLOGON "^TWSUSER^"
-job JOB_NAME -user USER_ID -i JOB_ID
RECOVERY STOP
WORKSTATION#VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY
SCRIPTNAME "^TWSSCRIPTS^SCRIPT"
STREAMLOGON "^TWSUSER^"
-job VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY -user MCMILS03 -i 8970597895
RECOVERY STOP
This is as far as I got (again, I know it's crap):
file="/dir/dir/dir/file1"
while IFS= read -r line
do
cat dummy_file | sed "s/JOB_NAME/$file1/" | sed "s/JOB_ID/$file2/" | sed "s/USER_ID/$file3" #####this is where i get stuck as i dont know how to reference file2 and file3##### >>new_file.txt
done
You really don't want a do/while loop in the shell. Just do:
awk '/^WORKSTATION/{
getline jobname < "file1";
getline user_id < "file2";
getline job_id < "file3"
}
{
gsub("JOB_NAME", jobname);
gsub("USER_ID", user_id);
gsub("JOB_ID", job_id)
}1' dummy_file
This might work for you (GNU parallel and sed):
parallel -q sed 's/JOB_NAME/{1}/;s/USER_ID/{2}/;s/JOB_ID/{3}/' templateFile >newFile :::: file1 ::::+ file2 ::::+ file3
This creates newFile by appending a filled-in copy of templateFile for each set of corresponding lines in file1, file2 and file3.
N.B. the ::::+ operator links the input sources, pairing corresponding lines of file1, file2 and file3, rather than producing the default Cartesian product of all combinations.
Using GNU awk (ARGIND and 2d arrays):
$ gawk '
NR==FNR { # store the template file
t=t (t==""?"":ORS) $0 # to t var
next
}
{
a[FNR][ARGIND]=$0 # store filen records to 2d array
}
END { # in the end
for(i=1;i<=FNR;i++) { # for each record stored from filen
t_out=t # make a working copy of the template
gsub(/JOB_NAME/,a[i][2],t_out) # replace with data
gsub(/USER_ID/,a[i][3],t_out)
gsub(/JOB_ID/,a[i][4],t_out)
print t_out # output
}
}' template file1 file2 file3
Output:
WORKSTATION#VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY
SCRIPTNAME "^TWSSCRIPTS^SCRIPT"
STREAMLOGON "^TWSUSER^"
-job VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY -user MCMILS03 -i 8970597895
RECOVERY STOP
...
Bash variant
#!/bin/bash
exec 5<file1 # create file descriptor for file with job names
exec 6<file2 # create file descriptor for file with user ids
exec 7<file3 # create file descriptor for file with job ids
dummy=$(cat dummy_file) # load the dummy/template text
output () { # create output by inserting new values in a copy of dummy var
out=${dummy//JOB_NAME/$JOB_NAME}
out=${out//USER_ID/$USER_ID}
out=${out//JOB_ID/$JOB_ID}
printf "\n$out\n"
}
while read -u5 JOB_NAME; do # this will read from all files and print output
read -u6 USER_ID
read -u7 JOB_ID
output
done
From read help
$ read --help
...
-u fd read from file descriptor FD instead of the standard input
...
And a variant with paste
#!/bin/bash
dummy=$(cat dummy_file)
while read JOB_NAME USER_ID JOB_ID; do
out=${dummy//JOB_NAME/$JOB_NAME}
out=${out//USER_ID/$USER_ID}
out=${out//JOB_ID/$JOB_ID}
printf "\n$out\n"
done < <(paste file1 file2 file3)
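For the sample files, the paste output that the loop reads looks like this (first line shown; the fields are tab-separated, which the default IFS splits into the three variables):
$ paste file1 file2 file3 | head -1
VBDSBQ_KFGP_SAPECC_PRGX_ACCNT_WKLY	MCMILS03	8970597895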

bash / sed / awk Remove or gsub timestamp pattern from text file

I have a text file like this:
1/7/2017 12:53 DROP TABLE table1
1/7/2017 12:53 SELECT
1/7/2017 12:55 --UPDATE #dat_recency SET
Select * from table 2
into table 3;
I'd like to remove all of the timestamp patterns (M/D/YYYY HH:MM, M/DD/YYYY HH:MM, MM/D/YYYY HH:MM, MM/DD/YYYY HH:MM). I can find the patterns using grep but can't figure out how to use gsub. Any suggestions?
DESIRED OUTPUT:
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
You can use this sed command to remove the date/time stamps from the start of each line:
sed -i.bak -E 's~([0-9]{1,2}/){2}[0-9]{4} [0-9]{2}:[0-9]{2} *~~' file
cat file
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
Using the default space separator, set the first and second columns to empty strings and then print the whole line.
awk '/^[0-9]/{$1=$2="";gsub(/^[ \t]+|[ \t]+$/, "")} !/^[0-9]/{print}' sample.csv
The command checks whether each line starts with a digit. If it does, the first 2 columns are replaced with empty strings and the leading whitespace is removed; since the modified line no longer starts with a digit, the second rule then prints it. Lines that never started with a digit are simply printed unchanged.
output:
DROP TABLE table1
SELECT
--UPDATE #dat_recency SET
Select * from table 2
into table 3;
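Since the question explicitly asks about gsub, here is a minimal gawk sketch that strips a leading timestamp with a single gsub call (interval expressions such as {1,2} assume GNU awk or another POSIX-conformant awk):
gawk '{ gsub(/^[0-9]{1,2}\/[0-9]{1,2}\/[0-9]{4} [0-9]{1,2}:[0-9]{2} */, "") } 1' file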

How to use sed to insert a line before each line in a file with the original line's content surrounded by a string?

I am trying to use sed (GNU sed version 4.2.1) to insert a line before each line in a file, with that line's content surrounded by a string.
Input:
truncate table ALPHA;
truncate table BETA;
delete from TABLE_CHARLIE where ID=1;
Expected Result:
SELECT 'truncate table ALPHA;' from dual;
truncate table ALPHA;
SELECT 'truncate table BETA;' from dual;
truncate table BETA;
SELECT 'delete from TABLE_CHARLIE where ID=1;' from dual;
delete from TABLE_CHARLIE where ID=1;
I have tried to make use of the ampersand (&) special character, but this does not seem to work. If I put anything after the ampersand in the replacement string, the output is not correct.
Attempt 1:
sed -e "s/\(.*\)/SELECT '&\n&/g" input.txt
output:
SELECT 'truncate table ALPHA;
truncate table ALPHA;
SELECT 'truncate table BETA;
truncate table BETA;
SELECT 'delete from TABLE_CHARLIE where ID=1;
delete from TABLE_CHARLIE where ID=1;
With the preceding code, I get the SELECT ' as expected, but once I attempt to add ' from dual; to the right side of the string, things get out of whack.
Attempt 2:
sed -e "s/\(.*\)/SELECT '&' from dual;\n&/g" input.txt
output:
' from dual;cate table ALPHA;
truncate table ALPHA;
' from dual;cate table BETA;
truncate table BETA;
SELECT 'delete from TABLE_CHARLIE where ID=1;' from dual;
You can take advantage of the hold space to temporarily store the original line.
sed "h;s/.*/'SELECT '&' from dual;/;p;g" input.txt
or more readably:
sed "
h
s/.*/SELECT '&' from dual;/
p
g" input.txt
Here's a breakdown of the command.
First, each line of the input is placed in the pattern space.
The h command copies the contents of the pattern space to the hold space.
The s command performs a substitution on the pattern space. The & represents whatever was matched. This command leaves the hold space unaffected.
The p command outputs the contents of the pattern space to standard output.
The g command copies the contents of the hold space to the pattern space.
By default, the contents of the pattern space are written to standard output before reading the next input line.
As Glenn Jackman points out, you can replace p;g with G. This builds up a two-line value in the pattern space that is then printed, rather than printing two separate pattern spaces.
sed "h;s/.*/'SELECT '&' from dual;/;G" input.txt
Also, you can add comments to the sed command so that you can understand what the line noise does later :), if this is in a script.
sed "
# The input line is first copied to the pattern space
h # Copy the pattern space to the hold space
s/.*/SELECT '&' from dual;/ # Modify the pattern space
p # Print the (modified) pattern space
g # Copy the hold space to the pattern space
# The output of the pattern space (the original input line) is now printed
" input.txt
If you're looking for an alternative to sed, these work:
awk '{printf "SELECT '\''%s'\'' from dual;\n%s\n", $0, $0}' file
perl -lpe "print qq{SELECT '\$_' from dual;}" file
Your second attempt works on both the 4.2.1 and 4.2.2 versions of sed. I reproduced the same incorrect output when I saved your input file with Windows line endings (carriage return + line feed).
Use this command on your input file before running your sed command:
tr -d '\15\32' < winfile.txt > unixfile.txt
Or as you suggest, simply by using the dos2unix utility.
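To check for Windows line endings before converting, the file utility usually reports them (an illustrative session):
$ file input.txt
input.txt: ASCII text, with CRLF line terminators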
Here's how to do it with awk:
awk -v PRE="SELECT '" -v SU="' from dual;" '{print PRE $0 SU; print}'

How to extract one column of a csv file

If I have a csv file, is there a quick bash way to print out the contents of only any single column? It is safe to assume that each row has the same number of columns, but each column's content would have different length.
You could use awk for this. Change '$2' to the nth column you want.
awk -F "\"*,\"*" '{print $2}' textfile.csv
Yes. cat mycsv.csv | cut -d ',' -f3 will print the 3rd column.
The simplest way I was able to get this done was to just use csvtool. I had other use cases for csvtool as well, and it handles quotes and delimiters appropriately when they appear within the column data itself.
csvtool format '%(2)\n' input.csv
Replacing 2 with the column number will effectively extract the column data you are looking for.
Landed here looking to extract from a tab-separated file. Thought I would add:
cat textfile.tsv | cut -f2 -s
where -f2 extracts the second column (cut fields are 1-indexed) and -s suppresses lines that contain no delimiter.
Here is a csv file example with 2 columns
myTooth.csv
Date,Tooth
2017-01-25,wisdom
2017-02-19,canine
2017-02-24,canine
2017-02-28,wisdom
To get the first column, use:
cut -d, -f1 myTooth.csv
f stands for Field and d stands for delimiter
Running the above command will produce the following output.
Output
Date
2017-01-25
2017-02-19
2017-02-24
2017-02-28
To get the 2nd column only:
cut -d, -f2 myTooth.csv
And here is the output
Output
Tooth
wisdom
canine
canine
wisdom
Another use case:
Your csv input file contains 10 columns and you want columns 2 through 5 and column 8, using comma as the separator.
cut uses -f (meaning "fields") to specify columns and -d (meaning "delimiter") to specify the separator. You need to specify the latter because some files may use spaces, tabs, or colons to separate columns.
cut -f 2-5,8 -d , myvalues.csv
cut is a command-line utility; here are some more examples:
SYNOPSIS
cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-d delim] [-s] [file ...]
I think the easiest is using csvkit:
Gets the 2nd column:
csvcut -c 2 file.csv
However, there's also csvtool, and probably a number of other csv bash tools out there:
sudo apt-get install csvtool (for Debian-based systems)
This would return the column whose header (first row) is 'ID':
csvtool namedcol ID csv_file.csv
This would return the fourth column:
csvtool col 4 csv_file.csv
If you want to drop the header row:
csvtool col 4 csv_file.csv | sed '1d'
First we'll create a basic CSV
[dumb#one pts]$ cat > file
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
Then we get the 1st column
[dumb#one pts]$ awk -F , '{print $1}' file
a
1
a
1
Many answers for this questions are great and some have even looked into the corner cases.
I would like to add a simple answer that can be of daily use, for the cases where you don't run into those corner cases (like escaped commas, or commas inside quotes, etc.).
FS (Field Separator) is the variable whose value defaults to a space, so awk by default splits each line on whitespace.
Using BEGIN (executed before any input is read), we can set this variable to anything we want...
awk 'BEGIN {FS = ","}; {print $3}'
The above code will print the 3rd column in a csv file.
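Setting FS in a BEGIN block is equivalent to passing the separator with -F on the command line, so the same column extraction can also be written as:
awk -F ',' '{print $3}' textfile.csv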
The other answers work well, but since you asked for a solution using just the bash shell, you can do this:
AirBoxOmega:~ d$ cat > file #First we'll create a basic CSV
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
a,b,c,d,e,f,g,h,i,k
1,2,3,4,5,6,7,8,9,10
And then you can pull out columns (the first in this example) like so:
AirBoxOmega:~ d$ while IFS=, read -a csv_line;do echo "${csv_line[0]}";done < file
a
1
a
1
a
1
a
1
a
1
a
1
So there's a couple of things going on here:
while IFS=, - this is saying to use a comma as the IFS (Internal Field Separator), which is what the shell uses to know what separates fields (blocks of text). So saying IFS=, is like saying "a,b" is the same as "a b" would be if the IFS=" " (which is what it is by default.)
read -a csv_line; - this reads each line, one at a time, splitting it into an array named csv_line (one element per field), and hands it to the "do" section of our while loop
do echo "${csv_line[0]}";done < file - now we're in the "do" phase, and we're saying echo the 0th element of the array "csv_line". This action is repeated on every line of the file. The < file part is just telling the while loop where to read from. NOTE: remember, in bash, arrays are 0 indexed, so the first column is the 0th element.
So there you have it, pulling out a column from a CSV in the shell. The other solutions are probably more practical, but this one is pure bash.
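To pull a different column with the same approach, just change the array index; for example, the third column (arrays are still 0-indexed):
while IFS=, read -a csv_line;do echo "${csv_line[2]}";done < file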
You could use GNU Awk, see this article of the user guide.
As an improvement to the solution presented in the article (in June 2015), the following gawk command allows double quotes inside double-quoted fields; a double quote there is represented by two consecutive double quotes (""). Furthermore, this allows empty fields, but even this cannot handle multiline fields. The following example prints the 3rd column (via c=3) of textfile.csv:
#!/bin/bash
gawk -- '
BEGIN{
FPAT="([^,\"]*)|(\"((\"\")*[^\"]*)*\")"
}
{
if (substr($c, 1, 1) == "\"") {
$c = substr($c, 2, length($c) - 2) # Get the text within the two quotes
gsub("\"\"", "\"", $c) # Normalize double quotes
}
print $c
}
' c=3 < <(dos2unix <textfile.csv)
Note the use of dos2unix to convert possible DOS style line breaks (CRLF i.e. "\r\n") and UTF-16 encoding (with byte order mark) to "\n" and UTF-8 (without byte order mark), respectively. Standard CSV files use CRLF as line break, see Wikipedia.
If the input may contain multiline fields, you can use the following script. Note the use of special string for separating records in output (since the default separator newline could occur within a record). Again, the following example prints the 3rd column (via c=3) of textfile.csv:
#!/bin/bash
gawk -- '
BEGIN{
RS="\0" # Read the whole input file as one record;
# assume there is no null character in input.
FS="" # Suppose this setting eases internal splitting work.
ORS="\n####\n" # Use a special output separator to show borders of a record.
}
{
nof=patsplit($0, a, /([^,"\n]*)|("(("")*[^"]*)*")/, seps)
field=0;
for (i=1; i<=nof; i++){
field++
if (field==c) {
if (substr(a[i], 1, 1) == "\"") {
a[i] = substr(a[i], 2, length(a[i]) - 2) # Get the text within
# the two quotes.
gsub(/""/, "\"", a[i]) # Normalize double quotes.
}
print a[i]
}
if (seps[i]!=",") field=0
}
}
' c=3 < <(dos2unix <textfile.csv)
There is another approach to the problem. csvquote can output the contents of a CSV file modified so that special characters within fields are transformed, allowing the usual Unix text-processing tools to be used to select a certain column. For example, the following code outputs the third column:
csvquote textfile.csv | cut -d ',' -f 3 | csvquote -u
csvquote can be used to process arbitrary large files.
I needed proper CSV parsing, not cut / awk and prayer. I'm trying this on a mac without csvtool, but macs do come with ruby, so you can do:
echo "require 'csv'; CSV.read('new.csv').each {|data| puts data[34]}" | ruby
I wonder why none of the answers so far have mentioned csvkit.
csvkit is a suite of command-line tools for converting to and working
with CSV
csvkit documentation
I use it exclusively for csv data management and so far I have not found a problem that I could not solve using csvkit.
To extract one or more columns from a csv file you can use the csvcut utility that is part of the toolbox. To extract the second column use this command:
csvcut -c 2 filename_in.csv > filename_out.csv
csvcut reference page
If the strings in the csv are quoted, add the quote character with the q option:
csvcut -q '"' -c 2 filename_in.csv > filename_out.csv
Install with pip install csvkit or sudo apt install csvkit.
A simple solution using awk. In place of colNum, put the number of the column you need to print.
cat fileName.csv | awk -F ";" '{ print $colNum }'
csvtool col 2 file.csv
where 2 is the column you are interested in
you can also do
csvtool col 1,2 file.csv
to do multiple columns
You can't do it without a full CSV parser.
If you know your data will not be quoted, then any solution that splits on , will work well (I tend to reach for cut -d, -f1 | sed 1d), as will any of the CSV manipulation tools.
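Spelled out against the myTooth.csv example from earlier in this thread (the sed 1d simply drops the header row), that looks like:
cut -d, -f1 myTooth.csv | sed 1d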
If you want to produce another CSV file, then xsv, csvkit, csvtool, or other CSV manipulation tools are appropriate.
If you want to extract the contents of one single column of a CSV file, unquoting them so that they can be processed by subsequent commands, this Python 1-liner does the trick for CSV files with headers:
python -c 'import csv,sys'$'\n''for row in csv.DictReader(sys.stdin): print(row["message"])'
The "message" inside of the print function selects the column.
If the CSV file doesn't have headers:
python -c 'import csv,sys'$'\n''for row in csv.reader(sys.stdin): print(row[1])'
Python's CSV library supports all kinds of CSV dialects, so if your CSV file uses different conventions, it's possible to support them with relatively little change to the code.
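For example, for a semicolon-separated file the same one-liner only needs a delimiter argument to csv.reader (the delimiter and column index here are illustrative):
python -c 'import csv,sys'$'\n''for row in csv.reader(sys.stdin, delimiter=";"): print(row[1])'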
Been using this code for a while, it is not "quick" unless you count "cutting and pasting from stackoverflow".
It uses ${##} and ${%%} operators in a loop instead of IFS. It calls 'err' and 'die', and supports only comma, dash, and pipe as SEP chars (that's all I needed).
err() { echo "${0##*/}: Error:" "$#" >&2; }
die() { err "$#"; exit 1; }
# Return Nth field in a csv string, fields numbered starting with 1
csv_fldN() { fldN , "$1" "$2"; }
# Return Nth field in string of fields separated
# by SEP, fields numbered starting with 1
fldN() {
local me="fldN: "
local sep="$1"
local fldnum="$2"
local vals="$3"
case "$sep" in
-|,|\|) ;;
*) die "$me: arg1 sep: unsupported separator '$sep'" ;;
esac
case "$fldnum" in
[0-9]*) [ "$fldnum" -gt 0 ] || { err "$me: arg2 fldnum=$fldnum must be number greater or equal to 0."; return 1; } ;;
*) { err "$me: arg2 fldnum=$fldnum must be number"; return 1;} ;;
esac
[ -z "$vals" ] && err "$me: missing arg2 vals: list of '$sep' separated values" && return 1
fldnum=$(($fldnum - 1))
while [ $fldnum -gt 0 ] ; do
vals="${vals#*$sep}"
fldnum=$(($fldnum - 1))
done
echo ${vals%%$sep*}
}
Example:
$ CSVLINE="example,fields with whitespace,field3"
$ for fno in $(seq 3); do echo field$fno: $(csv_fldN $fno "$CSVLINE"); done
field1: example
field2: fields with whitespace
field3: field3
You can also use a while loop:
IFS=,
while read name val; do
echo "............................"
echo Name: "$name"
done<itemlst.csv
