Convert CSV to HTML table using shell

How can I convert a CSV file into an HTML table?
I have a CSV file delimited with commas "," and I want to convert this file to an HTML table.

OK, you really want it only in bash? Mission accomplished.
cat > input.csv
a,b,c
d,e,f
g,h,i
echo "<table>" ; while read INPUT ; do echo "<tr><td>${INPUT//,/</td><td>}</td></tr>" ; done < input.csv ; echo "</table>"
<table>
<tr><td>a</td><td>b</td><td>c</td></tr>
<tr><td>d</td><td>e</td><td>f</td></tr>
<tr><td>g</td><td>h</td><td>i</td></tr>
</table>
My first try used "cat", but I figured that was cheating, so I rewrote it using "while read".
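If the fields can contain characters that are special in HTML, a minimal sketch that escapes them first (this assumes plain comma-separated input with no quoted fields or embedded commas):
#!/bin/bash
# Variation that HTML-escapes &, < and > in each line before building the row.
echo "<table>"
while IFS= read -r line; do
    line="${line//&/&amp;}"   # escape & first, before new entities are introduced
    line="${line//</&lt;}"
    line="${line//>/&gt;}"
    echo "<tr><td>${line//,/</td><td>}</td></tr>"
done < input.csv
echo "</table>"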

XmlGrid.net has a nice tool to convert a CSV file into an HTML table. Here is the link:
http://xmlgrid.net/csvToHtml.html

Related

Loop for n lines - create csv

I have a text file that looks like this:
data1-1
data1-2
data1-3
data2-1
data2-2
data2-3
data3-1
data3-2
data3-3
And I want to transform it into a CSV that looks like:
data1-1,data1-2,data1-3
data2-1,data2-2,data2-3
data3-1,data3-2,data3-3
What is the best way to solve this problem? I can create my CSV with an echo command
echo "object1,object2,object3" > test.csv
But after that, I'm not sure what the best loop method is. Please advise. Thanks.
paste -d "," - - - <file >test.csv
paste reads one line from standard input for each - operand, so this takes three lines at a time and joins them with commas. Output to test.csv:
data1-1,data1-2,data1-3
data2-1,data2-2,data2-3
data3-1,data3-2,data3-3
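An awk alternative, in case paste is not available or the grouping logic needs to change later (a sketch that assumes the line count is a multiple of three, with the same input file name as above):
awk '{ ORS = (NR % 3 ? "," : "\n") } { print }' file > test.csv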

How to split a CSV file into multiple files based on column value

I have a CSV file which could look like this:
name1;1;11880
name2;1;260.483
name3;1;3355.82
name4;1;4179.48
name1;2;10740.4
name2;2;1868.69
name3;2;341.375
name4;2;4783.9
There could be more or fewer rows, and I need to split it into multiple .dat files, each containing the rows that share the same value in the second column. (Then I will make a bar chart for each .dat file.) In this case there should be two files:
data1.dat
name1;1;11880
name2;1;260.483
name3;1;3355.82
name4;1;4179.48
data2.dat
name1;2;10740.4
name2;2;1868.69
name3;2;341.375
name4;2;4783.9
Is there any simple way of doing it with bash?
You can use awk to generate a file containing only the rows with a particular value in the second column:
awk -F ';' '($2==1){print}' data.dat > data1.dat
Just change the value in the $2== condition.
Or, if you want to do this automatically, just use:
awk -F ';' '{print > ("data"$2".dat")}' data.dat
which writes each row to a file whose name contains the value of the second column.
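One caveat with the automatic version: if the second column has very many distinct values, some awk implementations run out of open file descriptors. A sketch that closes each file after writing (note the switch to >>, so remove any old data*.dat files before rerunning):
awk -F ';' '{
    f = "data" $2 ".dat"   # output file named after the second column
    print >> f             # append, since the file is reopened on every line
    close(f)               # release the file descriptor right away
}' data.dat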
Try this:
while IFS=";" read -r a b c; do echo "$a;$b;$c" >> data${b}.dat; done <file

combine lines of csv in bash

I want to create a new CSV file for each city by combining several CSV files with rows and columns; one column holds the name of the city, which repeats in all the CSV files...
For example,
I have files named after the date, YYYYMMDD: 20140713.csv, 20140714.csv, 20140715.csv...
They have the same structure, the same number of rows and columns; for example, 20140713.csv:
City, Data, TMinreal, TMaxreal, TMinext, TMaxext, DiffTMin, DiffTMax
Milano,20140714,19.0,28.8,18,27,1,1.8
Rome,20140714,18.1,29.3,14,29,4.1,0.3
Pisa,20140714,10.8,27.5,8,29,2.8,-1.5
Venecia,20140714,21.1,29.1,16,27,5.1,2.1
I want to combine all these CSV files and get one CSV file per city, such as Milano.csv, containing the information about that city gathered from all the combined CSVs.
For example, if I combine 20140713.csv, 20140714.csv, and 20140715.csv, Milano.csv would be:
Milano,20140713,19.0,28.8,18,26,1,2.8
Milano,20140714,19.0,28.8,20,27,-1,1.8
Milano,20140715,21.0,26.8,19,27,2,-0.2
Any idea? Thank you.
Untested, but this should work:
awk -F, 'FNR==1{next} {file = $1".csv"; print > file}' 20*.csv
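If each per-city file should also start with the header row, a hedged variant (assuming the header line is identical in every input file):
awk -F, '
    FNR == 1 { hdr = $0; next }                          # remember the header, do not copy it as data
    {
        f = $1 ".csv"
        if (!(f in seen)) { print hdr > f; seen[f] = 1 }  # write the header once per city file
        print > f
    }
' 20*.csv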
You can have this bash script:
#!/bin/bash
for FILE; do
{
read ## Skip header
while IFS=, read -r A B; do
echo "$A,$B" >> "$A".csv
done
} < "$FILE"
done
Then run as:
bash script.sh file1.csv file2.csv ...

I want my bash script output in html format?

I am parsing a CSV file using a bash script; my output is in tabular form with a number of rows and columns, so when I redirect my output to a text file the alignment is off and it looks messy.
Can anyone guide me on how to get my output in HTML format, or suggest any other alternative way?
Thanks in advance.
If you don't really need the output in HTML, but you're having trouble with column alignment using tabs, you can get good column alignment with printf.
By the way, it would help if your question included some sample input, the script that you're using to parse and output it and some sample output.
Here is a simple demonstration of printf:
$ cat file
example text,123,word,23.12
more text,1004,long sequence of words,1.1
text,1,a,1000.42
$ cat script
#!/bin/bash
headformat='%-*s%-*s%*s%*s\n'
format='%-*s%-*s%*d%*.*f\n'
modwidth=16; descwidth=24; qtywidth=6; pricewidth=10
printf "$headformat" "$modwidth" Model "$descwidth" Desc. "$qtywidth" Qty "$pricewidth" Price
while IFS=, read model quantity description price
do
printf "$format" "$modwidth" "$model" "$descwidth" "$description" "$qtywidth" "$quantity" "$pricewidth" 2 "$price"
done < file
$ ./script
Model           Desc.                      Qty     Price
example text    word                       123     23.12
more text       long sequence of words    1004      1.10
text            a                            1   1000.42
Write it out in TSV, then have an XSLT stylesheet convert it from TSV to XHTML. You can use $'\t' in bash to produce a tab character.
A simple solution would use column(1):
column -t -s, <( echo "head1,head2,head3,head4"; cat csv.dat )
with a result like this one:
head1  head2     head3   head4
aaaa   33333     bbb     123
aaa    333333    bbbx    123
aa     3333333   bbbxx   123
a      33333333  bbbxxx  123
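If you really do want HTML rather than aligned plain text, a minimal awk sketch (assuming simple comma-separated fields with no embedded commas or HTML-special characters; csv.dat is the sample file name used above):
awk -F, '
    BEGIN { print "<table>" }
    {
        printf "<tr>"
        for (i = 1; i <= NF; i++) printf "<td>%s</td>", $i
        print "</tr>"
    }
    END { print "</table>" }
' csv.dat > table.html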

Bash script to convert a date and time column to unix timestamp in .csv

I am trying to create a script to convert two columns in a .csv file, date and time, into Unix timestamps. So I need to get the date and time columns from each row, convert them, and insert an additional column at the end containing the timestamp.
Could anyone help me? So far I have discovered the Unix command to convert any given date and time to a Unix timestamp:
date -d "2011/11/25 10:00:00" "+%s"
1322215200
I have no experience with bash scripting; could anyone get me started?
Examples of my columns and rows:
Columns: Date, Time,
Row 1: 25/10/2011, 10:54:36,
Row 2: 25/10/2011, 11:15:17,
Row 3: 26/10/2011, 01:04:39,
Thanks so much in advance!
You don't provide an excerpt from your CSV file, so I'm using this one:
[foo.csv]
2011/11/25;12:00:00
2010/11/25;13:00:00
2009/11/25;19:00:00
Here's one way to solve your problem:
$ cat foo.csv | while read line ; do echo $line\;$(date -d "${line//;/ }" "+%s") ; done
2011/11/25;12:00:00;1322218800
2010/11/25;13:00:00;1290686400
2009/11/25;19:00:00;1259172000
(EDIT: Removed an unnecessary variable.)
(EDIT2: Altered the date command so the script actually works.)
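Note that the rows in the original question use DD/MM/YYYY with a comma separator, which GNU date will not parse directly; a minimal sketch that reorders the date first (the column layout is assumed from the sample rows shown in the question):
# Reorder DD/MM/YYYY into YYYY/MM/DD so GNU date accepts it,
# then append the Unix timestamp as an extra column.
while IFS=', ' read -r d t _; do
    IFS=/ read -r day month year <<< "$d"
    ts=$(date -d "$year/$month/$day $t" '+%s')
    echo "$d, $t, $ts"
done < input.csv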
This should do the job:
awk 'BEGIN{FS=OFS=", "}{t=$1" "$2; "date -d \""t"\" +%s"|getline d; print $1,$2,d}' yourCSV.csv
Note: you didn't give any example, and since you mentioned CSV, I assume that the column separator in your file is a comma.
Test:
kent$ echo "2011/11/25, 10:00:00"|awk 'BEGIN{FS=OFS=", "}{t=$1" "$2; "date -d \""t"\" +%s"|getline d; print $1,$2,d}'
2011/11/25, 10:00:00, 1322211600
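If the file is large, spawning date once per line becomes slow; with GNU awk you could compute the timestamp with mktime() instead (a sketch, assuming the same ", "-separated YYYY/MM/DD layout as in the test above):
echo "2011/11/25, 10:00:00" | gawk 'BEGIN { FS = OFS = ", " }
{
    split($1, d, "/")   # d[1]=year d[2]=month d[3]=day
    split($2, t, ":")   # t[1]=hour t[2]=min  t[3]=sec
    print $1, $2, mktime(d[1] " " d[2] " " d[3] " " t[1] " " t[2] " " t[3])
}'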
Now two improvements:
First: no need for cat foo.csv; just stream it via < foo.csv into the while loop.
Second: no need for echo and tr to create the date string format; just use bash's internal pattern substitution and do it in place:
while read line ; do echo ${line}\;$(date -d "${line//;/ }" +'%s'); done < foo.csv
