Manipulate CSV data using bash [duplicate]

This question already has answers here:
What's the most robust way to efficiently parse CSV using awk?
(6 answers)
Closed 3 years ago.
I have a CSV file with several rows and columns; some rows have 4 columns and some have 5. I want to add one more column to the rows that have 4 columns so that every row has 5. The information I must add should go in the 3rd column, or at the end.
The CSV file looks like this:
name;ip;supplier;os;manufact
How can this be done in bash?

You can use read to split each line on ; and then echo the desired number of columns:
while IFS=';' read -ra COL; do
    echo "${COL[0]};${COL[1]};${COL[2]};${COL[3]};${COL[4]};"
done < test.csv
For example, with test.csv containing:
1;2;3;4
1;2;3;4;5
1;2;3;4;
1;2;3;4;5;
The above script outputs:
1;2;3;4;;
1;2;3;4;5;
1;2;3;4;;
1;2;3;4;5;
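An alternative sketch with awk: assigning $5 on a short row extends the record to 5 fields and forces awk to rebuild it with the output separator, so no per-field echo is needed. This assumes the same test.csv sample as above.

```shell
# Recreate the sample input from the question
printf '%s\n' '1;2;3;4' '1;2;3;4;5' '1;2;3;4;' '1;2;3;4;5;' > test.csv

# Assigning $5 on rows with fewer than 5 fields pads the record with an
# empty field and rebuilds it with OFS; longer rows pass through as read.
awk -F';' -v OFS=';' 'NF < 5 { $5 = "" } { print }' test.csv
```

Unlike the bash loop, this leaves rows that already have more than 5 fields untouched rather than truncating them.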


Cut columns two and three using bash [duplicate]

This question already has answers here:
How to extract one column of a csv file
(18 answers)
Closed 1 year ago.
I have a .csv file with three columns. I want to keep the first column only. I have been trying to work with a command similar to the one below.
cut -f 1,4 output.csv > output.txt
No matter what I do, my output remains the same, giving me all three columns. Can anyone give me some insight?
Thanks!
Read the file one line at a time and trim everything to the right of the first comma:
while read -r line; do echo "${line%%,*}"; done < output.csv > output.txt
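For completeness, the original cut attempt fails because cut splits on tabs by default, not commas; passing an explicit delimiter makes it work. A minimal sketch, using made-up three-column sample data:

```shell
# Hypothetical sample three-column CSV
printf '%s\n' 'a,b,c' 'd,e,f' > output.csv

# cut splits on tabs unless -d overrides it; -f1 keeps the first field
cut -d',' -f1 output.csv > output.txt
cat output.txt
```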

How to extract the data with delimiter [duplicate]

This question already has answers here:
Extract specific columns from delimited file using Awk
(8 answers)
Closed 4 years ago.
I am parsing a CSV file using a while loop in a shell script. The data is in the format below:
ba04ba54,1234,MMS,"[""some"", ""somet2"", ""somet""]",21556,48834
code:
while IFS=, read -r id mvid conkey values cretime modified; do
    echo "$id,$mvid,$conkey,$values,$cretime,$modified"
done < Input_file
but values is assigned
"[""some""
instead of
"[""some"", ""somet2"", ""somet""]". How can I get the full quoted field in a shell script?
You could try the following. Plain IFS splitting cannot honor the quotes, but awk can match the whole quoted list:
awk 'match($0, /"\[".*\]"/) { print substr($0, RSTART, RLENGTH) }' Input_file
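Run against the sample record, match() finds the span from the opening "[ to the closing ]" (greedily), and substr() extracts it using the RSTART/RLENGTH variables that match() sets. A self-contained run with the one sample line from the question:

```shell
# The sample record from the question
printf '%s\n' 'ba04ba54,1234,MMS,"[""some"", ""somet2"", ""somet""]",21556,48834' > Input_file

# match() sets RSTART (match position) and RLENGTH (match length);
# substr() then pulls out the whole quoted list
awk 'match($0, /"\[".*\]"/) { print substr($0, RSTART, RLENGTH) }' Input_file
```

Note that .* is greedy, so this assumes only one such quoted list appears per record.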

Bash to turn files of lines into an n column CSV [duplicate]

This question already has answers here:
How to convert rows to columns in unix
(5 answers)
Transpose rows into column in unix
(7 answers)
Closed 4 years ago.
I have data like this:
A
B
C D
E
F
G
H I
I want it to look like this:
A,B,C
D,E,F
G,H,I
How can I achieve this using command-line tools?
This question has each data cell on its own line: How to convert rows to columns in unix
The following command could help:
xargs -n 3 < Input_file | tr ' ' ','
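A self-contained run with the sample data. Because xargs tokenizes on whitespace rather than on lines, the two-cell lines ("C D" and "H I") still contribute one token per cell:

```shell
# Recreate the sample input
printf '%s\n' 'A' 'B' 'C D' 'E' 'F' 'G' 'H I' > Input_file

# xargs groups 3 whitespace-separated tokens per output line;
# tr then turns the separating spaces into commas
xargs -n 3 < Input_file | tr ' ' ','
```

One caveat: xargs gives quotes and backslashes in the input special meaning, so this sketch assumes plain tokens with no embedded spaces, quotes, or commas.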

Keep text file rows by line number in bash [duplicate]

This question already has answers here:
Print lines indexed by a second file
(4 answers)
Closed 8 years ago.
I have two files. The first file (called k.txt) looks like this:
lineTTY
lineRTU
lineERT
... further lines like this ...
The other file (called w.txt) contains indices of rows which shall be kept. It looks like:
2
9
12
The indices in the latter file are sorted. Is there a way to do this quickly in bash? My file is large, over 1 million rows.
Every line is a row of a matrix in a text file, and only the specific rows listed in the other file should remain in the matrix.
I think what you need here is:
xargs -I{} sed -n '{}p' k.txt < w.txt
If you must also sort the index file first, then:
sort -g w.txt | xargs -I{} sed -n '{}p' k.txt
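Because the xargs/sed pipeline rescans k.txt once per index, a single-pass awk approach may scale much better on a million-row file. A sketch, using made-up sample data in place of the real files:

```shell
# Hypothetical sample data: 4 data rows, keep rows 2 and 3
printf '%s\n' 'lineTTY' 'lineRTU' 'lineERT' 'lineQWE' > k.txt
printf '%s\n' '2' '3' > w.txt

# First pass (NR==FNR is true only while reading w.txt) loads the wanted
# line numbers into an array; the second pass prints any line of k.txt
# whose line number within the file (FNR) is in that array.
awk 'NR == FNR { want[$1]; next } FNR in want' w.txt k.txt
```

This reads each file exactly once, and the index file does not need to be sorted.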

Shell command-line sorting [duplicate]

This question already has answers here:
find difference between two text files with one item per line [duplicate]
(11 answers)
Closed 9 years ago.
I have a Masters.txt (all records) and a New.txt file. I want to process New.txt against Masters.txt and output all the lines from New.txt that do not exist in Masters.txt
I'm not sure if this is something the sort -u command can do.
Sort both files first using sort and then use the comm command to list the lines that exist only in new.txt and not in masters.txt. Something like:
sort masters.txt >masters_sorted.txt
sort new.txt >new_sorted.txt
comm -2 -3 new_sorted.txt masters_sorted.txt
comm produces three columns in its output by default; column 1 contains lines unique to the first file, column 2 contains lines unique to the second file; column 3 contains lines common to both files. The -2 -3 switches suppress the second and third columns.
See this brief tutorial on the Linux comm command:
http://unstableme.blogspot.com/2009/08/linux-comm-command-brief-tutorial.html
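A quick end-to-end run of the steps above, with made-up data where only "date" is new:

```shell
# Hypothetical sample data
printf '%s\n' 'apple' 'banana' 'cherry' > masters.txt
printf '%s\n' 'banana' 'date' > new.txt

# comm requires sorted input
sort masters.txt > masters_sorted.txt
sort new.txt > new_sorted.txt

# -2 suppresses lines unique to masters_sorted.txt, -3 suppresses lines
# common to both, leaving only lines unique to new_sorted.txt
comm -2 -3 new_sorted.txt masters_sorted.txt
```

If you would rather skip the sorting step, grep -Fxvf masters.txt new.txt gives the same result on unsorted files, at the cost of holding all of masters.txt in memory as patterns.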
