Bash to turn files of lines into an n column CSV [duplicate]

This question already has answers here:
How to convert rows to columns in unix
(5 answers)
Transpose rows into column in unix
(7 answers)
Closed 4 years ago.
I have data like this:
A
B
C D
E
F
G
H I
I want it to look like this:
A,B,C
D,E,F
G,H,I
How can I achieve this using command-line tools?
Note that the linked question covers the case where each data cell is on its own line: How to convert rows to columns in unix

The following command may help you:
xargs -n 3 < Input_file | tr ' ' ','
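Here xargs -n 3 regroups the input into batches of three whitespace-separated tokens per line, and tr turns the separating spaces into commas. Note this splits on any whitespace, so "C D" on one line becomes two cells, which is exactly what the sample data needs. If a single awk call is preferred, a rough equivalent sketch (assuming the same whitespace splitting and three-column grouping):
awk '{ for (i = 1; i <= NF; i++) printf "%s%s", $i, (++n % 3 ? "," : "\n") }' Input_file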

How can I delete all lines after a specific string from a number of files [duplicate]

This question already has answers here:
removing lines between two patterns (not inclusive) with sed
(6 answers)
How do you run a command eg chmod, for each line of a file?
(9 answers)
Closed 7 months ago.
I have n files, like:
file1:
Hannah
Lars
Test 123
1aaa
2eee
file2:
Mike
Charly
Stephanie
Earl
Test 123
3ccc
4ddd
5eee
I want to remove all rows after "Test 123" from all n files.
The number of rows to delete varies between files.
There's a very similar question, How can I delete all lines before a specific string from a number of files, in which sed -i.bak '1,/Test 123/d' file* works perfectly. But how can I do the same for all lines after a specific string?
Thanks!
Despite the comments and the closed question, nothing in the other threads worked. I figured out a solution:
for FILENAME in * ; do sed -i.bak '/Test 123/q' "$FILENAME" ; done
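Note that sed's q command quits after printing the line that matches, so "Test 123" itself is kept in each file. If the matching line should be deleted as well, a hedged variant (assuming the same file glob):
for FILENAME in * ; do sed -i.bak '/Test 123/,$d' "$FILENAME" ; done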

Manipulate csv data using bash [duplicate]

This question already has answers here:
What's the most robust way to efficiently parse CSV using awk?
(6 answers)
Closed 3 years ago.
I have a CSV file with several rows and columns; some rows have 4 columns and some have 5. I want to add one more column to the rows that have 4, so that all rows have 5 columns. The information I need to add should go in the third column, or at the end.
The CSV file looks like this:
name;ip;supplier;os;manufact
How can this be done in bash?
You can use read to split the CSV, then output the desired number of columns.
while IFS= read -r line; do
    IFS=';' read -ra COL <<< "$line"   # split the line on ;
    echo "${COL[0]};${COL[1]};${COL[2]};${COL[3]};${COL[4]};"
done < test.csv
For example, with test.csv containing:
1;2;3;4
1;2;3;4;5
1;2;3;4;
1;2;3;4;5;
The above script outputs:
1;2;3;4;;
1;2;3;4;5;
1;2;3;4;;
1;2;3;4;5;
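For reference, the padding can also be done in a single awk pass; a minimal sketch, assuming the input rows carry no trailing semicolon and the goal is simply to guarantee five ;-separated fields per row:
awk -F';' -v OFS=';' 'NF < 5 { $5 = "" } 1' test.csv
Unlike the loop above, this prints rows without the extra trailing semicolon: assigning to $5 forces awk to rebuild the record with five fields joined by OFS.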

Delete lines containing a given string at different positions in different files [duplicate]

This question already has an answer here:
find and remove a line in multiple files
(1 answer)
Closed 5 years ago.
As an example below, let's say the line I want to delete contains the string "X". How do I delete that line in each respective file, say, in a loop? Can I do this using grep and sed, or any other shell/bash program for that matter?
Line  File 1  File 2
1     A       A
2     B       B
3     X       C
4     C       X
5     D       D
6     E       X
7     X       E
If you simply want to remove any line that contains the character "X", then a simple grep command comes to the rescue here:
grep -v "X" Input_file
Adding a sed solution too, which will change the files in place:
sed -i.bak '/X/d' file*
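If you prefer the grep approach applied to every file in an explicit loop, as the question suggests, a sketch (the file* glob is an assumption about the file names):
for f in file* ; do
    grep -v "X" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
Unlike sed -i, grep does not edit in place, so the output is written to a temporary file and then moved over the original.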

Keep text file rows by line number in bash [duplicate]

This question already has answers here:
Print lines indexed by a second file
(4 answers)
Closed 8 years ago.
I have two files. The first file (called k.txt) looks like this:
lineTTY
lineRTU
lineERT
...further lines like this...
The other file (called w.txt) contains indices of rows which shall be kept. It looks like:
2
9
12
The indices in the latter file are sorted. Is there a way to do this quickly in bash? My file is large, over 1 million rows.
Every line is a row of a matrix in a text file, and only the rows specified in the other file should remain in the matrix.
I think what you need here is:
cat w.txt | xargs -i{} sed -n '{}p' k.txt
If you also need to sort the index file first, then:
sort -g w.txt | xargs -i{} sed -n '{}p' k.txt
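Be aware that this runs sed once per index, which gets slow on a million-row file. A single-pass awk sketch that loads the index file first (assuming one line number per line in w.txt):
awk 'NR == FNR { keep[$1]; next } FNR in keep' w.txt k.txt
While reading w.txt (where NR == FNR), it records the wanted line numbers in an array; while reading k.txt, it prints only those lines whose number is in the array.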

Shell line command Sorting command [duplicate]

This question already has answers here:
find difference between two text files with one item per line [duplicate]
(11 answers)
Closed 9 years ago.
I have a Masters.txt (all records) and a New.txt file. I want to process New.txt against Masters.txt and output all the lines from New.txt that do not exist in Masters.txt.
I'm not sure if this is something the sort -u command can do.
Sort both files first using sort and then use the comm command to list the lines that exist only in new.txt and not in masters.txt. Something like:
sort masters.txt >masters_sorted.txt
sort new.txt >new_sorted.txt
comm -2 -3 new_sorted.txt masters_sorted.txt
comm produces three columns in its output by default; column 1 contains lines unique to the first file, column 2 contains lines unique to the second file; column 3 contains lines common to both files. The -2 -3 switches suppress the second and third columns.
See this brief tutorial on the Linux comm command:
http://unstableme.blogspot.com/2009/08/linux-comm-command-brief-tutorial.html
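A quick demonstration of the column behavior (the sample data here is purely illustrative):
printf 'B\nD\n' > new_sorted.txt
printf 'A\nB\nC\n' > masters_sorted.txt
comm -2 -3 new_sorted.txt masters_sorted.txt
# prints: D
Both inputs must already be in sorted order, which is why the files are run through sort first.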
