Using sed to dynamically generate a file name - shell

I have a CSV file that I'd like to split up based on a field in the file. Essentially, there can be two brands, GVA and HBVL. I'd like to split the file into a file for each brand before I import it into a database.
Sample of the CSV file
"D509379D5055821451C3695A3752DCCD",'1900-01-01 01:00:00',"M","1740","GVA",'2009-07-01 13:25:00',0
"159A58BE41012787D531C7157F688D86",'1900-01-01 00:00:00',"V","1880","GVA",'2008-06-06 11:21:00',0
"D0BB5C058794BBE4478DDA536D1E4872",'1900-01-01 00:00:00',"M","9270","GVA",'2007-09-18 13:21:00',0
"BCC7096803E5E60E05DC12FB9951E0CF",'1900-01-01 00:00:00',"M","3500","HBVL",'2007-09-18 13:21:00',1
"7F85FCE6F13775A8A3054E3438B81599",'1900-01-01 00:00:00',"M","3970","HBVL",'2007-09-18 13:20:00',0
Part of the problem is the size of the file. It's about 39 MB. My original attempt at this looked like this:
while read line ; do
    name=`echo "$line" | sed -n 's/\(.*\)"\(GVA\|HBVL\)",\(.*\)$/\2/ p' | tr '[:upper:]' '[:lower:]'`
    info=`echo "$line" | sed -n 's/\(.*\)"\(GVA\|HBVL\)",\(.*\)$/\1\3/ p'`
    echo "${info}" >> "${BASEDIR}/${today}/${name}.txt"
done < "${file}"
After about 2.5 hours, only about half of the file had been processed. I have another file that could potentially be up to 250 MB in size, and I can't imagine how long that would take.
What I'd like to do is pull the brand out of the line and write the line to a file named after the brand. I can remove the brand, but I don't know how to use it to create a file. I've started with sed, but I'm not above using another language if it's more appropriate.

The overhead of the original while loop, which runs multiple commands for every line of input, is dire!
sed -e '/"GVA"/w gva.file' -e '/"HBVL"/w hbvl.file' -n $file
The sed script says:
write lines that match the GVA tag to gva.file
write lines that match the HBVL tag to hbvl.file
and don't print anything else ('-n')
Note that different versions of sed can handle different numbers of auxiliary files. If you need more than, say, twenty output files at once, you may need to look at other technology (but test what the limit is on your machine). If the file is sorted so that all the GVA records appear together followed by all the HBVL records, you could consider using csplit. Alternatively, a scripting language like Perl could handle more. If you exceed the number of file descriptors allowed to your process, it becomes hard to do the splitting in a single pass over the data file.
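For the sorted case, a minimal csplit sketch (assuming a pre-sorted copy of the data in a hypothetical file sorted.csv; brand_ is an arbitrary output prefix):
# split at the first "HBVL" line: brand_00 gets the GVA rows, brand_01 the HBVL rows
csplit -z -f brand_ sorted.csv '/"HBVL"/'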

grep '"GVA"' $file >GVA.txt
grep '"HVBL"' $file >HVBL.txt

# awk -F"," '{o=$5;gsub(/\"/,"",o);print $0 > o}' OFS="," file
# more GVA
"D509379D5055821451C3695A3752DCCD",'1900-01-01 01:00:00',"M","1740","GVA",'2009-07-01 13:25:00',0
"159A58BE41012787D531C7157F688D86",'1900-01-01 00:00:00',"V","1880","GVA",'2008-06-06 11:21:00',0
"D0BB5C058794BBE4478DDA536D1E4872",'1900-01-01 00:00:00',"M","9270","GVA",'2007-09-18 13:21:00',0
# more HBVL
"BCC7096803E5E60E05DC12FB9951E0CF",'1900-01-01 00:00:00',"M","3500","HBVL",'2007-09-18 13:21:00',1
"7F85FCE6F13775A8A3054E3438B81599",'1900-01-01 00:00:00',"M","3970","HBVL",'2007-09-18 13:20:00',0

Related

grep of 50000 strings in a big file performance improvement

I have a file, about 200 MB in size, with about 1.2 million lines in it. Let's call it reading.txt. I have another file, input.txt,
which has about 50000 lines. I want to take the string on each line of input.txt and grep for it in reading.txt. For each matched line
in reading.txt, I want to write that complete line to another file, output.txt.
As of now, I am looping through every string in input.txt and grepping reading.txt for it. This approach takes more than an hour.
Is there any way to improve performance so that this process takes less time?
while read line
do
    LC_ALL=C grep "${line}" reading.txt 2>/dev/null
done < input.txt >> output.txt
man grep yields (among others):
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. If this option is used
multiple times or is combined with the -e (--regexp) option,
search for all patterns given. The empty file contains zero
patterns, and therefore matches nothing.
grep -f input.txt reading.txt > output.txt
...will print all lines in 'reading.txt' with a substring matching a line in 'input.txt', in the order of 'reading.txt', to 'output.txt'
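If the lines in input.txt are plain strings rather than regular expressions (which appears to be the case here), telling grep to treat them as fixed strings with -F is usually considerably faster still; a variant of the same command under that assumption:
LC_ALL=C grep -F -f input.txt reading.txt > output.txt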
You don't specify this, but it may be relevant (you said there are about 1.2 million lines in 'reading.txt') - a separate output file for every matching line:
#!/bin/sh
nl='
'
IFS=$nl
c=0
for i in $(grep -f input.txt reading.txt); do
    c=$((c+1))
    echo "$i" > output$c.txt
done
There are tidier methods of setting IFS to a new line, for example in bash: IFS=$'\n' (also you can use > output$((++c)).txt in bash)
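Putting those together, the same loop in bash might look like this (a sketch relying on bash-only features):
#!/bin/bash
IFS=$'\n'
c=0
for i in $(grep -f input.txt reading.txt); do
    echo "$i" > "output$((++c)).txt"
done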

How to add a header to text file in bash?

I have a text file and want to convert it to a CSV file. Before converting it, I want to add a header to the text file so that the CSV file has the same header. I have one thousand columns in the text file and want one thousand column names. As a side note, the content of the text file is just rows of numbers separated by commas ",". Is there any way to add the header line in bash?
I tried the approach below and it didn't work. First I ran this in Python:
for i in range(1001):
    print "col" + "_" + "i"
I saved the output of this in a text file (python header.py >> header.txt) and then prepended it to my original text file like below:
cat header.txt filename.txt > newfilename.txt
then converted the txt file to a csv file with "mv newfilename.txt newfilename.csv".
But unfortunately this doesn't work, as the header line has double the number of the other rows for some reason. I would appreciate any help solving this problem.
Based on the description, your file is already comma separated, so it is already a CSV file. You just want to add a column-number header line.
$ awk -F, 'NR==1{for(i=1;i<=NF;i++) printf "col_%d%s", i, (i==NF?ORS:FS)}1' file
will add as many column headers as there are fields in the first row of the file
e.g.
$ seq 5 | paste -sd, | # create 1,2,3,4,5 as a test input
awk -F, 'NR==1{for(i=1;i<=NF;i++) printf "col_%d%s", i, (i==NF?ORS:FS)}1'
col_1,col_2,col_3,col_4,col_5
1,2,3,4,5
You can generate the column names in bash using one of the options below. Each example generates a header.txt file. You already have code to add this to the beginning of your file as a header.
Using bash loops
Bash loops for this many iterations will be inefficient, but will work.
for i in {1..1000}; do
    echo -n "col_$i "
done > header.txt
echo >> header.txt
or using seq
for i in $(seq 1 1000); do
    echo -n "col_$i "
done > header.txt
echo >> header.txt
Using seq only
Using seq alone will be more efficient.
seq -f "col_%g" -s" " 1 1000 > header.txt
Use seq and sed
You can use the seq utility to construct your CSV header, with a little help from Bash expansions. You can then insert the new header row into your existing CSV file, or concatenate the header with your data.
For example:
# construct a quoted CSV header
columns=$(seq -f '"col_%g"' -s', ' 1 1001)
# strip the trailing comma
columns="${columns%,*}"
# insert headers as first line of foo.csv with GNU sed
sed -i -e "1 i\\${columns}" /tmp/foo.csv
Caveats
If you don't have GNU sed, you can also use cat, sponge, or other tools to concatenate your header and data, although most of your concatenation options will require redirection to a new combined file to avoid clobbering your existing data.
For example, given /tmp/data.csv as your original data file:
seq -f '"col_%g"' -s', ' 1 1001 > /tmp/header.csv
sed -i -e 's/,[[:space:]]*$//' /tmp/header.csv
cat /tmp/header.csv /tmp/data.csv > /tmp/new_file.csv
Also, note that while Bash solutions that avoid calling standard utilities are possible, doing it in pure Bash might be too slow or memory intensive for large data sets.
Your mileage may vary.
printf "col%s," {1..100} |
sed 's/,$//' |
cat - filename.txt >newfilename.txt
I believe sed should supply the missing final newline as a side effect. If not, maybe try 's/,$/\n/' though this isn't entirely portable, either. You could probably replace the cat with sed as well, something like
... | sed 's/,$//;r filename.txt'
but again, I'm not entirely sure how portable this is.

Bash - read specific line from a file with all sorts of data and store as a variable

I have looked for an answer to what seems like a simple question, but I feel as though all these questions (below) only briefly touch on the matter and/or over-complicate the solution.
Read a file and split each line into two variables with bash program
Bash read from file and store to variables
Need to assign the contents of a text file to a variable in a bash script
What I want to do is read specific lines from a file (titled 'input'), store them as variables and then use them.
For example, in this code, every 9th line after a certain point contains a filename that I want to store as a variable for later use. How can I do that?
steps=49
for((i=1;i<=${steps};i++)); do
...
g=$((9 * $i + 28)) #In.omega filename
For the bigger picture, I basically need to print a specific line (line 9) from the file whose name is specified in the gth line of the file named "input"
sed '1,39p;d' data > temp
sed "9,9p;d" [filename specified in line g of input] >> temp
sed '41,$p;d' data >> temp
mv temp data
Say you want to assign the 49th line of the file $FILE to the variable $ARG; you can do:
$ ARG=`cat $FILE | head -49 | tail -1`
To get line 9 of the file named in the gth line of the file named input:
sed -n 9p "$(sed -n ${g}p input)"
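Split into two steps for readability (a sketch reusing the asker's g variable; fname and line9 are hypothetical names):
fname=$(sed -n "${g}p" input)   # the filename stored on line g of "input"
line9=$(sed -n '9p' "$fname")   # line 9 of that file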
arg=$(cat sample.txt | sed -n '2p')
where arg is the variable, sample.txt is the file, and 2 is the line number

How do I delete all rows with a blank space in the third column within a file?

So, I have a file which contains the results of some calculations I've run in the past weeks. I've collected the results in a file which I intend to plot. It is basically a bunch of rows with the format "x" "y" "f(x,y)", like this:
1.7 4.7 -460.5338556921
1.7 4.9 -460.5368762353
1.7 5.5
However, some lines, exemplified by the last one, contain a blank space in the 3rd column, resulting from failed calculations. I'd still like to plot the viable points, but, as there are thousands of points (and therefore rows), that task just can't be accomplished easily by hand. I'd like to know how to make a script or program (I'd prefer a shell script, but I'll gladly go along with whatever works) which identifies those lines and deletes them. Does anyone know a way to do it?
awk '$3' <filename>
or better
awk 'NF > 2' <filename> # safer, in case an entry in column 3 happens to be zero
This will do the job!
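For example, to keep only the complete rows in a new file (data.txt and cleaned.txt are hypothetical names):
awk 'NF > 2' data.txt > cleaned.txt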
The simplest form of grep command, which should be understood by pretty much any grep these days (it uses -v to drop every line that has at most two non-blank fields):
grep -v '^[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*$' <filename>
With grep:
grep ' .* [^ ]' file
or using ERE:
grep -E '\s\S+\s\S' file
I would use:
perl -lanE 'print if @F==3 && /^[\d\s\.+-]+$/' file
will print only lines:
which contain 3 fields
and contain only numbers, spaces, and the characters . + -
I do not know how you are going to plot. You could take a grep or awk solution and pipe all valid lines into your plotting application.
When you need to call a program for each set of values, you can skip the invalid lines when you are reading the values:
while read -r x y fxy; do
    if [ -n "${fxy}" ]; then
        myplotter "$x" "$y" "${fxy}"
    fi
done < file

Iterating grep over and over. How can I make my script faster?

I have to find a string of numbers from one file in another file.
My code is this:
#!/bin/sh
IFS="F"
while read f1 f2
do
LC_ALL=C fgrep -m 1 "$f1" BC_Tel.inp
done < telephonelist.txt
The string of numbers are located in telephonelist.txt. The format of this text file is as follows:
8901040000001304669F 370040000130466
8901040000001317380F 370040000131738
8901040000001330045F 370040000133004
8901040000001330052F 370040000133005
8901040000001330060F 370040000133006
I'm looking for the lines with the above numbers delimited by 'F' in BC_Tel.inp, which has the following format:
981040000030289765F1 655F370D1E86260ED550A2D6F80EFF96 01000045384136453332440303FFFFFFFFFFFFFFFF0000 01000037333643383234380303FFFFFFFFFFFFFFFF0000 083907400030289765 00000031323334FFFFFFFF030334303733323638310AFF 01000034383532FFFFFFFF030334333738333137320AFF 0020 01007F107FD2266C31249530FC531B474F6D44482C007F007F007F007F007F007F007F007F007F007F007F007F007F007F007F107F97AB34277D5378AEC893716281F99ABC007F007F007F007F007F007F007F007F007F007F007F007F007F007F007F107F6608B51E4378BE23072E843D6741A184007F007F007F007F007F007F007F007F007F007F007F007F007F007F 636C8D46973FAE4C1BD181BB4E0D4DA2A5E0455E86406CCF40F309F63470CE07 000003817826FF0187494010083A65626501586519104106 083907400030289765636C8D46973FAE4C1BD181BB4E0D4DA2 080900000000101003636C8D46973FAE4C1BD181BB4E0D4DA2 8901040000038279561 40732681
telephonelist.txt and BC_Tel.inp are huge files with over a million lines. The script works fine, but I want to make it faster. I'm basically running over the txt file once, but I'm grepping the .inp file over and over. How do I go about making this process faster?
tl;dr
I want to optimize my code so it runs faster.
A single grep will do it:
cut -d"F" -f1 telephonelist.txt | grep -F -m1 -f- BC_Tel.inp
The -f option tells grep to read its patterns from a file; here we're using the filename - to indicate stdin.
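To capture the matches in a file, as the loop's redirection did, just redirect the pipeline. Note that with a single grep invocation, -m1 stops after the first matching line in BC_Tel.inp overall, so drop it if you want a match for every pattern (output.txt is a hypothetical name):
cut -d"F" -f1 telephonelist.txt | grep -F -f- BC_Tel.inp > output.txt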
