How to convert the first letter of all words of several columns of a csv file to uppercase while making the rest of the letters lowercase? - bash

Bash 4.4.0
Ubuntu 16.04
I have several columns in a CSV file; some are all capital letters and some are lowercase. Some columns have only one word while others may have 50 words. At the moment I convert column by column with 2 commands, which is quite taxing on the server when the file has 50k lines.
Example:
#-- Place the header line in a temp file
head -n 1 "$tmp_input1" > "$tmp_input3"
#-- Remove the header line in the original file
tail -n +2 "$tmp_input1" > "$tmp_input1-temp" && mv "$tmp_input1-temp" "$tmp_input1"
#-- Change the words in the 11th column to lower case then change the first letter to upper case
awk -F"," 'BEGIN{OFS=","} {$11 = tolower($11); print}' "$tmp_input4" > "$tmp_input5"
sed -i "s/\b\(.\)/\u\1/g" "$tmp_input5"
#-- Change the words in the 12th column to lower case then change the first letter to upper case
awk -F"," 'BEGIN{OFS=","} {$12 = tolower($12); print}' "$tmp_input5" > "$tmp_input6"
sed -i "s/\b\(.\)/\u\1/g" "$tmp_input6"
#-- Change the words in the 13th column to lower case then change the first letter to upper case
awk -F"," 'BEGIN{OFS=","} {$13 = tolower($13); print}' "$tmp_input6" > "$tmp_input7"
sed -i "s/\b\(.\)/\u\1/g" "$tmp_input7"
cat "$tmp_input7" >> "$tmp_input3"
Is it possible to do multiple columns in a single command?
Here is an example of the csv file:
"dealer_id","vin","conditon","stocknumber","make","model","year","broken","trim","bodystyle","color","interiorcolor","interiorfabric","engine","enginedisplacement","engineaspiration","engineText","transmission","drivetrain","mpgcity","mpghighway","mileage","cylinders","fuelconditon","optiontext","description","titlestatus","warranty","price","specialprice","window_sticker_price","mirrorhangerprice","images","ModelCode","PackageCodes"
"JOHNVANC04A","2C4RC1N73JR290946","N","JR290946","Chrysler","Pacifica","2018","","Hybrid Limited FWD","Mini-van, Passenger","Brilliant BLACK Crystal PEARL Coat","","..LEATHER SEATS..","V6 Cylinder Engine","3.6L","","","AUTOMATIC","FWD","0","0","553","6","H","..1-SPEED A/T..,..AUTO-OFF HEADLIGHTS..,..BACK-UP CAMERA..,..COOLED DRIVER SEAT..,..CRUISE CONTROL..","======KEY FEATURES INCLUDE: . LEATHER SEATS. THIRD ROW SEAT. QUAD BUCKET SEATS. REAR AIR. HEATED DRIVER SEAT.","","0","41680","","48830","","http://i.autoupktech.com/c640/9c40231cbcfa4ef89425d108e4e3a410.jpg",http://i.autoupnktech.com/c640/9c40231cbcfa4ef89425d108e4e3a410.jpg","RUES53","AAX,AT2,DFQ,EH3,GWM,WPU"
Here's a snippet of the above columns, refined:
Column 11 should be - "Brilliant Black Crystal Pearl Coat"
Column 13 should be - "Leather Seats"
Column 16 should be - "Automatic"
Column 23 should be - "1-Speed A/T,Auto-Off Headlights,Back-up Camera"
Column 24 should be - "Key Features Include: Leather Seats,Third Row Seat"
Keep in mind, the double-quotes surrounding the columns can't be stripped. I only need to convert certain columns and not the entire file. Here's an example of the columns 11, 13, 16, 23 and 24 converted.
"Brilliant Black Crystal Pearl Coat","Leather Seats","Automatic","1-Speed A/T,Auto-Off Headlights,Back-up Camera","Key Features Include: Leather Seats,Third Row Seat"

Just to add another option, here is a one-liner using just sed:
sed -i -e 's/.*/\L&/' -e 's/[a-z]*/\u&/g' filename
And here is a proof of concept:
$ cat testfile
jUSt,a,LONG,list of SOME,RAnDoM WoRDs
ANother LIne
OneMore,LiNe
$ sed -e 's/.*/\L&/' -e 's/[a-z]*/\u&/g' testfile
Just,A,Long,List Of Some,Random Words
Another Line
Onemore,Line
$
If you want to convert just the header of the CSV file (the first line), prefix both s commands with the address 1 (i.e. 1s) so they apply only to the first line.
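For instance, with a hypothetical two-line file, addressing both substitutions to line 1 leaves everything after the header untouched:

```shell
# Only the first line is lower-cased and then title-cased;
# the data line passes through unchanged.
printf 'hEADer,COL\ndata,MORE\n' | sed -e '1s/.*/\L&/' -e '1s/[a-z]*/\u&/g'
# -> Header,Col
# -> data,MORE
```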
You can find an excellent article explaining the magic here: sed – Convert to Title Case.

Here is another alternative (off-topic here, I know) in Python 3:
import csv
from pathlib import Path
infile = Path('infile.csv')
outfile = Path('outfile.csv')
titled_cols = [10, 12, 15, 22, 23]
titled_data = []
with infile.open() as fin, outfile.open('w', newline='') as fout:
    for row in csv.reader(fin, quoting=csv.QUOTE_ALL):
        for i, col in enumerate(row):
            if i in titled_cols:
                row[i] = col.title()  # assign back into row; rebinding col alone would be lost
        titled_data.append(row)
    csv.writer(fout, quoting=csv.QUOTE_ALL).writerows(titled_data)
Just define the columns you want title-cased in titled_cols (column indexes are zero-based) and it will do what you want.
I guess infile and outfile are self-explanatory; outfile will contain the modified version of your original file.
I hope it helps.

You could create a user-defined function and apply it to the columns you need to modify.
awk -F, 'function toproper(s) { return toupper(substr(s, 1, 1)) tolower(substr(s, 2, length(s))) } {printf("%s,%s,%s,%s\n", toproper($1), toproper($2), toproper($3), toproper($4));}'
Input:
FOO,BAR,BAZ,ETC
Output:
Foo,Bar,Baz,Etc
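The same function can also be restricted to selected columns, which is closer to what the question asks. A sketch (the column list "11 12 13" and the sample line are assumptions; note that toproper() changes only the field's very first letter, so per-word capitalization would need an extra split() loop):

```shell
# Apply toproper() only to the columns listed in "11 12 13",
# leaving every other field untouched.
printf '1,2,3,4,5,6,7,8,9,10,FOO,BAR,BAZ QUX\n' |
  awk -F, '
    function toproper(s) { return toupper(substr(s, 1, 1)) tolower(substr(s, 2)) }
    BEGIN { OFS = ","; n = split("11 12 13", cols, " ") }
    { for (i = 1; i <= n; i++) $(cols[i]) = toproper($(cols[i])); print }'
# -> 1,2,3,4,5,6,7,8,9,10,Foo,Bar,Baz qux
```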

Assuming the fields of the csv file are not quoted by double quotes,
meaning that we can simply split a record on commas and whitespaces, how
about a Perl solution:
perl -pe 's/(^|(?<=[,\s]))([^,\s])([^,\s]*)((?=[,\s])|$)/\U$2\L$3/g' input.csv
input.csv:
Bash,4.4.0,Ubuntu,16.04
I have several columns in a CSV file,that, are, all capital letters
and some are lowercase.
Some columns have only,one,word,while others may have 50 words.
output:
Bash,4.4.0,Ubuntu,16.04
I Have Several Columns In A Csv File,That, Are, All Capital Letters
And Some Are Lowercase.
Some Columns Have Only,One,Word,While Others May Have 50 Words.

This version uses AWK to do the job:
This is the command (change file to your filename)
awk -F"," 'BEGIN{OFS=","}{ for (i=1; i<=NF; i++) { $i=toupper(substr($i,1,1))""tolower(substr($i,2,length($i)))}print $0}' file | awk -F" " 'BEGIN{OFS=" "} { for (i=1; i<=NF; i++) { $i=toupper(substr($i,1,1))""substr($i,2,length($i))}print $0}'
The test:
cat file
pepe is cool,ASDASD ASDAS,and no podpoiaops
awk -F"," 'BEGIN{OFS=","}{ for (i=1; i<=NF; i++) { $i=toupper(substr($i,1,1))""tolower(substr($i,2,length($i)))}print $0}' file | awk -F" " 'BEGIN{OFS=" "} { for (i=1; i<=NF; i++) { $i=toupper(substr($i,1,1))""substr($i,2,length($i))}print $0}'
Pepe Is Cool,Asdasd Asdas,And No Podpoiaops
Explanation
BEGIN{OFS=","} tells awk how to outuput the line.
The for statement uses NF, the built in internal variable for the
number of fields for each line
The substr divide and change the first letter of the field, and it's assigned to its line value again
All row is printed print $0
Finally, the second awk divides the lines created on the first example, but this time dividing with spaces as separator. This way, It detects all different words on the file, and changes every first Character of them.
Hope it helps

Searching for a string between two characters

I need to find two numbers from lines which look like this
>Chr14:453901-458800
I have a large quantity of those lines mixed with lines that don't contain ":", so we can search for the colon to find the lines with numbers. Every line has different numbers.
I need to find both numbers after ":", which are separated by "-", then subtract the first number from the second one and print the result on the screen for each line.
I'd like this to be done using awk
I managed to do something like this:
awk -e '$1 ~ /\:/ {print $0}' file.txt
but it's nowhere near the end result
For the example I showed above, my result would be:
4899
Because it is the result of 458800 - 453901 = 4899
I can't figure it out on my own and would appreciate some help
With GNU awk. Separate the row into multiple columns using the : and - separators. In each row containing :, subtract the contents of column 2 from the contents of column 3 and print the result.
awk -F '[:-]' '/:/{print $3-$2}' file
Output:
4899
Using awk
$ awk -F: '/:/ {split($2,a,"-"); print a[2] - a[1]}' input_file
4899

Unix Shell Scripting-how can i remove particular characers inside a text file?

I have a text file with 5 rows and 5 columns, all columns separated by "|". The content of the 2nd column should be 7 characters long.
If the 2nd column is longer than 7 characters, I want to remove the extra characters without opening the file.
For example:
cat file1
ff|hahaha1|kjbsb|122344|jbjbnjuinnv|
df|hadb123_udcvb|sbfuisdbvdkh|122344|jbjbnjuinnv|
gf|harayhe_jnbsnjv|sdbvdkh|12234|jbjbnj|
qq|kkksks2|datetag|7777|jbjbnj|
jj|harisha|hagte|090900|hags|
In the above case the 2nd and 3rd rows have a 2nd column longer than 7 characters. Now I want to remove those extra characters with an awk or sed command, without opening the input file.
I'm waiting for your responses guys.
Thanks in advance!!
Take a substring of length 7 from the second column with awk:
awk -F'|' -v OFS='|' '{ $2 = substr($2, 1, 7) }1' file
Now any strings longer than 7 characters will be made shorter. Any strings that were shorter will be left as they are.
The 1 at the end is the shortest true condition to trigger the default action, { print }.
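A minimal illustration of that idiom:

```shell
# The bare pattern 1 is always true, so the default action { print }
# fires for every line.
printf 'a\nb\n' | awk 1
# -> a
# -> b
```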
If you're happy with the changes, then you can overwrite the original file like this:
awk -F'|' -v OFS='|' '{ $2 = substr($2, 1, 7) }1' file > tmp && mv tmp file
i.e. redirect to a temporary file and then overwrite the original.
First try
sed 's/\(^[^|]*|[^|]\{7\}\)[^|]*/\1/' file1
What is happening here? We construct the command step-by-step:
# Replace something
sed 's/hadb123_udcvb/replaced/' file1
# Remember the matched string (will be used in a later command)
sed 's/\(hadb123_udcvb\)/replaced/' file1
# Replace exactly 7 characters that are not '|' (one time each line)
sed 's/\([^|]\{7\}\)/replaced/' file1
# Remove additional character until a '|'
sed 's/\([^|]\{7\}\)[^|]*/replaced/' file1
# Put back the string you remembered
sed 's/\([^|]\{7\}\)[^|]*/\1/' file1
# Extend the matched string with start-of-line (^), an any-length first field, and '|'
sed 's/\(^[^|]*|[^|]\{7\}\)[^|]*/\1/' file1
When this shows the desired output, you can add the option -i for changing the input file:
sed -i 's/\(^[^|]*|[^|]\{7\}\)[^|]*/\1/' file1

print 3 consecutive column after specific string from CSV

I need to print the 2 columns after a specific string (in my case it is 64). There can be multiple instances of 64 within the same CSV row; however, the next instance will not occur within 3 columns of the previous occurrence. The output for each instance should be on its own line and unique. The problem is that the specific string does not fall in the same column in every row. The rows hold dynamic data and there is no header in the CSV. Let's say below is the input file (it's just a sample; the actual file has approx. 300 columns and 5 million rows):
00:TEST,123453103279586,ABC,XYZ,123,456,65,906,06149,NIL TS21,1,64,906,06149,NIL TS22,1,64,916,06149,NIL BS20,1,64,926,06149,NIL BS30,1,64,906,06149,NIL CAML,1,ORIG,0,TERM,1,1,1,6422222222
00:TEST,123458131344169,ABC,XYZ,123,456,OCCF,1,1,1,64,857,19066,NIL TS21,1,64,857,19066,NIL TS22,1,64,857,19066,NIL BS20,1,64,857,19067,NIL BS30,1,64,857,19068,NIL PSS,1,E2 EPSDATA,GRANTED,NONE,1,N,N,256000,5
00:TEST,123458131016844,ABC,XYZ,123,456,HOLD,,1,64,938,36843,NIL TS21,1,64,938,36841,NIL TS22,1,64,938,36823,NIL BS20,1,64,938,36843,NIL BS30,1,64,938,36843,NIL CAML,1,ORIG,0,TERM,00,50000,N,N,N,N
00:TEST,123453102914690,ABC,XYZ,123,456,HOLD,,1,PBS,TS11,64,938,64126,NIL TS21,1,64,938,64126,NIL TS22,1,64,938,64126,NIL BS20,1,64,938,64226,NIL BS30,1,64,938,64326,NIL CAML,1,ORIG,0,TERM,1,1,1,6422222222,2222,R
Output required(only unique entries):
64,906,06149
64,857,19066
64,857,19067
64,857,19068
64,938,36843
64,938,36841
64,938,36823
64,938,36843
64,938,36843
64,938,64326
There are no performance-related concerns. I have tried to search many threads but could not find anything closely related. Please help.
We can use a pipe of two commands... the first puts each 64 at the start of a line, and the second prints the first three columns whenever it sees a leading 64.
sed 's/,64[,\n]/\n64,/g' file | awk -F, '/^64/ { print $1 FS $2 FS $3 }'
There are ways of doing this with a single awk command, but this felt quick and easy to me.
Though the sample data from the question contains redundant lines, karakfa (see below) reminds me that the question speaks of a "unique data" requirement. This version uses the keys of an associative array to keep track of duplicate records.
sed 's/,64[,\n]/\n64,/g' file | awk -F, 'BEGIN { split("",a) } /^64/ && !((x=$1 FS $2 FS $3) in a) { a[x]=1; print x }'
gawk:
awk -F, '{for(i=0;++i<=NF;){if($i=="64")a=4;if(--a>0)s=s?s","$i:$i;if(a==1){print s;s=""}}}' file
Sed for fun
sed -n -e 's/$/,n,n,n/' -e ':a' -e 'G;s/[[:blank:],]\(64,.*\)\(\n\)$/\2\1/;s/.*\(\n\)\(64\([[:blank:],][^[:blank:],]\{1,\}\)\{2\}\)\([[:blank:],][^[:blank:],]\{1,\}\)\{3\}\([[:blank:],].*\)\{0,1\}$/\1\2\1\5/;s/^.*\n\(.*\n\)/\1/;/^64.*\n/P;s///;ta' YourFile | sort -u
assuming columns are separated by blank space or comma
a sort -u is needed for uniqueness (possible in sed too, but it would take another "simple" action of the same kind in this case)
awk to the rescue!
$ awk -F, '{for (i=1; i<=NF; i++)
    if ($i == 64) {
      k = $i FS $(++i) FS $(++i)
      if (!a[k]++)
        print k
    }
  }' file
64,906,06149
64,916,06149
64,926,06149
64,857,19066
64,857,19067
64,857,19068
64,938,36843
64,938,36841
64,938,36823
64,938,64126
64,938,64226
64,938,64326
P.S. Your sample output doesn't match the given input.

Split a big txt file to do grep - unix

I work (Unix, shell scripts) with txt files that contain millions of fields separated by pipes, not separated by \n or \r.
something like this:
field1a|field2a|field3a|field4a|field5a|field6a|[...]|field1d|field2d|field3d|field4d|field5d|field6d|[...]|field1m|field2m|field3m|field4m|field5m|field6m|[...]|field1z|field2z|field3z|field4z|field5z|field6z|
All text is in the same line.
The number of fields is fixed for every file.
(in this example I have field1=name; field2=surname; field3=mobile phone; field4=email; field5=office phone; field6=skype)
When I need to find a field (e.g. field2), a command like grep doesn't work (everything is on the same line).
I think a good solution could be a script that splits the line every 6 fields with a "\n" and then does a grep. Am I right? Thank you very much!
With awk :
$ cat a
field1a|field2a|field3a|field4a|field5a|field6a|field1d|field2d|field3d|field4d|field5d|field6d|field1m|field2m|field3m|field4m|field5m|field6m|field1z|field2z|field3z|field4z|field5z|field6z|
$ awk -F"|" '{for (i=1;i<NF;i=i+6) {for (j=0; j<6; j++) printf $(i+j)"|"; printf "\n"}}' a
field1a|field2a|field3a|field4a|field5a|field6a|
field1d|field2d|field3d|field4d|field5d|field6d|
field1m|field2m|field3m|field4m|field5m|field6m|
field1z|field2z|field3z|field4z|field5z|field6z|
Here you can easily set the number of fields per line.
Hope this helps !
you can use sed to split the line in multiple lines:
sed 's/\(\([^|]*|\)\{6\}\)/\1\n/g' input.txt > output.txt
explanation:
we have to use heavy backslash-escaping of (){} which makes the code slightly unreadable.
but in short:
the term (([^|]*|){6}) (backslashes removed for readability) between s/ and /\1, will match:
[^|]* any character but '|', repeated multiple times
| followed by a '|'
the above is obviously one column and it is grouped together with enclosing parentheses ( and )
the entire group is repeated 6 times {6}
and this is again grouped together with enclosing parentheses ( and ), to form one full set
the rest of the term is easy to read:
replace the above (the entire dataset of 6 fields) with \1\n, the part between / and /g
\1 refers to the "first" group in the sed-expression (the "first" group that is started, so it's the entire dataset of 6 fields)
\n is the newline character
so replace the entire dataset of 6 fields by itself followed by a newline
and do so repeatedly (the trailing g)
you can use sed to convert every 6th | to a newline.
In my version of tcsh I can do:
sed 's/\(\([^|]\+|\)\{6\}\)/\1\n/g' filename
consider this (the demo uses \{2\}, splitting after every 2nd field, since the sample line has only 4 fields):
> cat bla
a1|b2|c3|d4|
> sed 's/\(\([^|]\+|\)\{2\}\)/\1\n/g' bla
a1|b2|
c3|d4|
This is how the regex works:
[^|] is any non-| character.
[^|]\+ is a sequence of at least one non-| characters.
[^|]\+| is a sequence of at least one non-| characters followed by a |.
\([^|]\+|\) is a sequence of at least one non-| characters followed by a |, grouped together
\([^|]\+|\)\{6\} is 6 consecutive such groups.
\(\([^|]\+|\)\{6\}\) is 6 consecutive such groups, grouped together.
The replacement just takes this sequence of 6 groups and adds a newline to the end.
Here is how I would do it with awk
awk -v RS="|" '{printf $0 (NR%7?RS:"\n")}' file
field1a|field2a|field3a|field4a|field5a|field6a|[...]
field1d|field2d|field3d|field4d|field5d|field6d|[...]
field1m|field2m|field3m|field4m|field5m|field6m|[...]
field1z|field2z|field3z|field4z|field5z|field6z|
Just adjust the NR%7 to the number of fields that suits you.
What about printing the lines in blocks of six?
$ awk 'BEGIN{FS=OFS="|"} {for (i=1; i<=NF; i+=6) {print $(i), $(i+1), $(i+2), $(i+3), $(i+4), $(i+5)}}' file
field1a|field2a|field3a|field4a|field5a|field6a
field1d|field2d|field3d|field4d|field5d|field6d
field1m|field2m|field3m|field4m|field5m|field6m
field1z|field2z|field3z|field4z|field5z|field6z
Explanation
BEGIN{FS=OFS="|"} set input and output field separator as |.
{for (i=1; i<=NF; i+=6) {print $(i), $(i+1), $(i+2), $(i+3), $(i+4), $(i+5)}} loops through the items in blocks of 6, printing six of them each time. As print ends by writing a newline, you are done.
If you want to treat the files as being in multiple lines, then make \n the field separator. For example, to get the 2nd column, just do:
tr \| \\n < input-file | sed -n 2p
To see which columns match a regex, do:
tr \| \\n < input-file | grep -n regex

Bash Text file formatting

I have some files with the following format:
555584280113;01-04-2013 00:00:11;0,22;889;30008;1501;sms;/xxx/yyy/zzz
552185022741;01-04-2013 00:00:13;0,22;889;30008;1501;sms;/xxx/yyy/zzz
5511965271852;01-04-2013 00:00:14;0,22;889;30008;1501;sms;/xxx/yyy/zzz
5511980644500;01-04-2013 00:00:22;0,22;889;30008;1501;sms;/xxx/yyy/zzz
553186398559;01-04-2013 00:00:31;0,22;889;30008;1501;sms;/xxx/yyy/zzz
555584280113;01-04-2013 00:00:41;0,22;889;30008;1501;sms;/xxx/yyy/zzz
558487839822;01-04-2013 00:01:09;0,22;889;30008;1501;sms;/xxx/yyy/zzz
I need them to have a 10-digit sequence number at the beginning, the prefix 55 removed from the second column (which I have done with a simple sed 's/^55//g'), and the date reformatted to look like this:
0000000001;555584280113;20130401 00:00:11;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000002;552185022741;20130401 00:00:13;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000003;5511965271852;20130401 00:00:14;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000004;5511980644500;20130401 00:00:22;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000005;553186398559;20130401 00:00:31;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000006;555584280113;01-04-2013 00:00:41;0,22;889;30008;1501;sms;/xxx/yyy/zzz
I have the date part in a separate way:
cat file.txt | cut -d\; -f2 | awk '{print $1}' |awk -v OFS="-" -F"-" '{print $3$2$1}'
And it works, but I don't know how to put it all together: the sequence + the sed for the prefix + the date format change. I'm not even sure how to do the sequence part.
Thanks for the help.
awk is one of the best tools out there for text parsing and formatting. Here is one way of meeting your requirements:
awk '
BEGIN { FS = OFS = ";" }
{
    printf "%010d;", NR
    $1 = substr($1,3)
    split($2, tmp, /[- ]/)
    $2 = tmp[3] tmp[2] tmp[1] " " tmp[4]
}1' file
We set the input and output field separator to ;.
We use printf to format your first-column number requirement.
We use the substr function to remove the first two characters of column 1.
We use the split function to reformat the date.
The trailing 1 triggers the default action, printing the modified record.
Output:
0000000001;5584280113;20130401 00:00:11;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000002;2185022741;20130401 00:00:13;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000003;11965271852;20130401 00:00:14;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000004;11980644500;20130401 00:00:22;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000005;3186398559;20130401 00:00:31;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000006;5584280113;20130401 00:00:41;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000007;8487839822;20130401 00:01:09;0,22;889;30008;1501;sms;/xxx/yyy/zzz
If the name of the input file is input, then the following command removes the 55, adds a 10-digit line number, and rearranges the date. With GNU sed:
nl -nrz -w10 -s\; input | sed -r 's/55//; s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3\2\1/'
If one is using Mac OSX (or another OS without GNU sed), then a slight change is required:
nl -nrz -w10 -s\; input | sed -E 's/55//; s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3\2\1/'
Sample output:
0000000001;5584280113;20130401 00:00:11;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000002;2185022741;20130401 00:00:13;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000003;11965271852;20130401 00:00:14;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000004;11980644500;20130401 00:00:22;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000005;3186398559;20130401 00:00:31;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000006;5584280113;20130401 00:00:41;0,22;889;30008;1501;sms;/xxx/yyy/zzz
0000000007;8487839822;20130401 00:01:09;0,22;889;30008;1501;sms;/xxx/yyy/zzz
How it works: nl is a handy *nix utility for adding line numbers. -w10 tells nl that we want 10 digit line numbers. -nrz tells nl to pad the line numbers with zeros, and -s\; tells nl to add a semicolon after the line number. (We have to escape the semicolon so that the shell ignores it.)
The remaining changes are handled by sed. The sed command s/55// removes the first occurrence of 55. The rearrangement of the date is handled by s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3\2\1/.
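To see what each stage contributes on its own, here are the two pieces run against dummy input:

```shell
# nl alone: zero-padded 10-digit line numbers, ';' separator
printf 'x\ny\n' | nl -nrz -w10 -s\;
# -> 0000000001;x
# -> 0000000002;y

# the date rewrite alone
printf '01-04-2013\n' | sed -E 's/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3\2\1/'
# -> 20130401
```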
You could actually use a Bash loop to do this.
i=0
while read f1 f2; do
((++i))
IFS=\; read n d <<< $f1
d=${d:6:4}${d:3:2}${d:0:2}
printf "%010d;%d;%d %s\n" $i $n $d $f2
done < file.txt
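The date rearrangement in the loop relies on Bash substring expansion, ${var:offset:length}; in isolation:

```shell
# Bash-only: pick year, month, day out of DD-MM-YYYY by offset/length
d=01-04-2013
echo "${d:6:4}${d:3:2}${d:0:2}"
# -> 20130401
```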
