Split a .txt file based on content - bash

I have a huge *.txt file as follows:
~~~~~~~~ small file content 1
~~~~~~~~ small file content 2
...
~~~~~~~~ small file content n
How do I split this into n files, preferably via bash?

Use csplit
$ csplit --help
Usage: csplit [OPTION]... FILE PATTERN...
Output pieces of FILE separated by PATTERN(s) to files `xx00', `xx01', ...,
and output byte counts of each piece to standard output.
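For the delimiter shown in the question, a minimal invocation could look like this (a sketch assuming GNU csplit, since the '{*}' repeat count is a GNU extension, and assuming every small file starts with a line beginning with ~~~~~~~~; BIGFILE and the smallfile_ prefix are placeholder names):
$ csplit -z -f smallfile_ BIGFILE '/^~~~~~~~~/' '{*}'
-z drops the empty piece before the first delimiter, -f sets the output prefix, and the repeated /^~~~~~~~~/ pattern starts a new smallfile_NN at every delimiter line.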

With awk:
awk 'BEGIN {c=1} NR % 10000 == 0 { c++ } { print $0 > ("splitfile_" c) }' LARGEFILE
will do. It sets up a counter that is incremented every 10,000 lines, then writes each line to the corresponding splitfile_<c> file. HTH
HTH

If each line of your HUGE text file holds the content of one small file (i.e. every line is a piece you want split out), then this should work -
One-liner:
awk '{print >("SMALL_BATCH_OF_FILES_" NR)}' BIG_FILE
Test:
[jaypal:~/Temp] cat BIG_FILE
~~~~~~~~ small file content 1
~~~~~~~~ small file content 2
~~~~~~~~ small file content 3
~~~~~~~~ small file content 4
~~~~~~~~ small file content n-1
~~~~~~~~ small file content n
[jaypal:~/Temp] awk '{print >("SMALL_BATCH_OF_FILES_" NR)}' BIG_FILE
[jaypal:~/Temp] ls -lrt SMALL_BATCH_OF_FILES_*
-rw-r--r-- 1 jaypalsingh staff 30 17 Dec 14:19 SMALL_BATCH_OF_FILES_6
-rw-r--r-- 1 jaypalsingh staff 32 17 Dec 14:19 SMALL_BATCH_OF_FILES_5
-rw-r--r-- 1 jaypalsingh staff 30 17 Dec 14:19 SMALL_BATCH_OF_FILES_4
-rw-r--r-- 1 jaypalsingh staff 30 17 Dec 14:19 SMALL_BATCH_OF_FILES_3
-rw-r--r-- 1 jaypalsingh staff 30 17 Dec 14:19 SMALL_BATCH_OF_FILES_2
-rw-r--r-- 1 jaypalsingh staff 30 17 Dec 14:19 SMALL_BATCH_OF_FILES_1
[jaypal:~/Temp] cat SMALL_BATCH_OF_FILES_1
~~~~~~~~ small file content 1
[jaypal:~/Temp] cat SMALL_BATCH_OF_FILES_2
~~~~~~~~ small file content 2
[jaypal:~/Temp] cat SMALL_BATCH_OF_FILES_6
~~~~~~~~ small file content n
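If each "small file" actually spans several lines between the ~~~~~~~~ markers, a delimiter-driven variant of the same idea might look like this (a sketch assuming every chunk starts with a line beginning with ~~~~~~~~; smallfile_ is just an illustrative prefix):
awk '/^~~~~~~~~/ { if (out) close(out); out = "smallfile_" ++n } { print > out }' BIG_FILE
Each time a delimiter line is seen it closes the previous output file (so you do not run into the open-file limit) and starts smallfile_1, smallfile_2, and so on.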

Related

Does anybody have a script that counts the number of consecutive files which contain a specific word?

Any resources or advice would help, since I am pretty rubbish at scripting
So, I need to go to this path: /home/client/data/storage/customer/data/2020/09/15
And check to see if there are 5 or more consecutive files that contain the word "REJECTED":
ls -ltr
-rw-rw-r-- 1 root root 5059 Sep 15 00:05 customer_rlt_20200915000514737_20200915000547948_8206b49d-b585-4360-8da0-e90b8081a399.zip
-rw-rw-r-- 1 root root 5023 Sep 15 00:06 customer_rlt_20200915000547619_20200915000635576_900b44dc-1cf4-4b1b-a04f-0fd963591e5f.zip
-rw-rw-r-- 1 root root 39856 Sep 15 00:09 customer_rlt_20200915000824108_20200915000908982_b87b01b3-a5dc-4a80-b19d-14f31ff667bc.zip
-rw-rw-r-- 1 root root 39719 Sep 15 00:09 customer_rlt_20200915000901688_20200915000938206_38261b59-8ebc-4f9f-9e2d-3e32eca3fd4d.zip
-rw-rw-r-- 1 root root 12829 Sep 15 00:13 customer_rlt_20200915001229811_20200915001334327_1667be2f-f1a7-41ae-b9ca-e7103d9abbf8.zip
-rw-rw-r-- 1 root root 12706 Sep 15 00:13 customer_rlt_20200915001333922_20200915001357405_609195c9-f23a-4984-936f-1a0903a35c07.zip
Example of rejected file:
customer_rlt_20200513202515792_20200513202705506_5b8deae0-0405-413c-9a81-d1cc2171fa51REJECTED.zip
What I have so far:
#!/bin/bash
YYYY=$(date +%Y);
MM=$(date +%m)
DD=$(date +%d)
#Set constants
CODE_OK=0
CODE_WARN=1
CODE_CRITICAL=2
CODE_UNKNOWN=3
#Set Default Values
FILE="/home/client/data/storage/customer/data/${YYYY}/${MM}/{DD}"
if [ ! -f $FILE ]
then
echo "NO TRANSACTIONS FOUND"
exit $CODE_CRITICAL
fi
You can do something quick in AWK:
$ cat consec.awk
/REJECTED/ {
    if (match_line == NR - 1) {
        consecutives++
    } else {
        consecutives = 1
    }
    if (consecutives == 5) {
        print "5 REJECTED"
        exit
    }
    match_line = NR
}
$ touch 1 2REJECTED 3REJECTED 5REJECTED 6REJECTED 7REJECTED 8
$ ls -1 | awk -f consec.awk
5 REJECTED
$ rm 3REJECTED; touch 3
$ ls -1 | awk -f consec.awk
$
This works by matching lines containing REJECTED, counting consecutive matches (checked with match_line == NR - 1, which means "the last matching line was the previous line") and printing "5 REJECTED" once the count of consecutive matches reaches 5.
I've used ls -1 (note digit 1, not letter l) to sort by filename in this example. You could use ls -1rt (digit 1 again) to sort by file modification time, as in your original post.
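To tie this back into the script from the question, a sketch along these lines could work (it reuses the asker's path and exit-code constants and assumes the awk program above has been saved as consec.awk):
DIR="/home/client/data/storage/customer/data/${YYYY}/${MM}/${DD}"
if [ ! -d "$DIR" ]; then
    echo "NO TRANSACTIONS FOUND"
    exit $CODE_CRITICAL
fi
# the file names embed the timestamp, so a name sort is chronological
if ls -1 "$DIR" | awk -f consec.awk | grep -q REJECTED; then
    echo "5 OR MORE CONSECUTIVE REJECTED FILES"
    exit $CODE_CRITICAL
fi
exit $CODE_OK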

Comparing two files on a common filename, but I want output from both when they match - how can I do this using awk?

I want to compare filenames of Today.txt with Main.txt.
If there is a match, then print all six columns of the matching line from Main.txt plus the time column of Today.txt into a new file, say matched.txt.
For the files that have no match in Main.txt, list the filename and time from Today.txt in another new file, say unmatched.txt.
NOTE: A plus sign (+) indicates the file is from the in-progress directory; filenames are sometimes prefixed with "+".
Main.txt
date filename timestamp space count status
Nov 4 +CHCK01_20161104.txt 06:39 2.15M 17153 on_time
Nov 4 TRIPS11_20161104.txt 09:03 0.00M 24 On_Time
Nov 4 AR02_20161104.txt 09:31 0.00M 7 On_Time
Nov 4 AR01_20161104.txt 09:31 0.04M 433 On_Time
Today.txt
filename time
CHCK01_20161104.txt 06:03
CHCK05_20161104.txt 11:10
CHCK09_20161104.txt 21:46
AR01_20161104.txt 09:36
AR02_20161104.txt 09:36
ifs01_20161104.txt 21:16
TRIPS11_20161104.txt 09:16
Required Output:
matched.txt
Nov 4 +CHCK01_20161104.txt 06:39 06:03 2.15M 17153 on_time
Nov 4 TRIPS11_20161104.txt 09:03 09:16 0.00M 24 On_Time
Nov 4 AR02_20161104.txt 09:31 09:36 0.00M 7 On_Time
Nov 4 AR01_20161104.txt 09:31 09:36 0.04M 433 On_Time
unmatched.txt
CHCK05_20161104.txt 11:10
CHCK09_20161104.txt 21:46
ifs01_20161104.txt 21:16
The command below gives me the proper output except for the second column (the time) from Today.txt; kindly help me with this, please.
awk 'FNR==1{next}
NR==FNR{a[$1]=$2; next}
{k=$3; sub(/^\+/,"",k)} k in a{print; delete a[k]}
END{for(k in a) print k,a[k] > "unmatched.txt"}' Today.txt Main.txt > matched.txt
Thanks a lot in advance !
Assuming that the filenames in Today.txt occur only once in Main.txt, you can write an awk script as
$ cat prog.awk
NR==FNR && NR>1{file[$1]=$2; next}
$3 in file{$4 = $4" "file[$3]; print > "matched.txt"; delete file[$3]}
END{for (filename in file) print filename, file[filename] > "unmatched.txt"}
$ awk -F"[ +]*" -f prog.awk today main
-F"[ +]*" Set the field separator as space or +. This will be use full to remove the + from the filenames.

awk: Group by and then sort by substrings of a string

Assuming we have following files:
-rw-r--r-- 1 user group 120 Aug 17 18:27 A.txt
-rw-r--r-- 1 user group 155 May 12 12:28 A.txt
-rw-r--r-- 1 user group 155 May 10 21:14 A.txt
-rw-rw-rw- 1 user group 700 Aug 15 17:05 B.txt
-rw-rw-rw- 1 user group 59 Aug 15 10:02 B.txt
-rw-r--r-- 1 user group 180 Aug 15 09:38 B.txt
-rw-r--r-- 1 user group 200 Jul 2 17:09 C.txt
-rw-r--r-- 1 user group 4059 Aug 9 13:58 D.txt
Considering only the HH:MM part of the timestamp (i.e. ignoring the date/day part), I want to sort this listing and pick the maximum and minimum timestamp for each file name.
So we want to group by last column and get min & max HH:MM.
Please assume that filename duplicates are allowed in my input data.
In the awk code, I particularly got stuck on how to group by filename and then sort by HH first and then MM.
The output we are expecting is in this format:
Filename | Min HHMM | Max HHMM
A.txt 12:28 21:14
C.txt 17:09 17:09
..
(or any other output format giving this details is good)
Can you please help..TIA
Try:
awk '{if ($8<min[$9] || !min[$9])min[$9]=$8; if ($8>max[$9])max[$9]=$8} END{for (f in min)print f,min[f],max[f]}' file | sort
Example
$ cat file
-rw-r--r-- 1 user group 120 Aug 17 18:27 A.txt
-rw-r--r-- 1 user group 155 May 12 12:28 A.txt
-rw-r--r-- 1 user group 155 May 10 21:14 A.txt
-rw-rw-rw- 1 user group 700 Aug 15 17:05 B.txt
-rw-rw-rw- 1 user group 59 Aug 15 10:02 B.txt
-rw-r--r-- 1 user group 180 Aug 15 09:38 B.txt
-rw-r--r-- 1 user group 200 Jul 2 17:09 C.txt
-rw-r--r-- 1 user group 4059 Aug 9 13:58 D.txt
$ awk '{if ($8<min[$9] || !min[$9])min[$9]=$8; if ($8>max[$9])max[$9]=$8} END{for (f in min)print f,min[f],max[f]}' file | sort
A.txt 12:28 21:14
B.txt 09:38 17:05
C.txt 17:09 17:09
D.txt 13:58 13:58
Warning
Your input looks like it was produced by ls. If that is so, be aware that the output of ls has a myriad of peculiarities and compatibility issues. The authors of ls recommend against parsing the output of ls.
How the code works
awk implicitly loops over every line of input. This code uses two associative arrays. min keeps track of the minimum time for each file name. max keeps track of the maximum.
if ($8<min[$9] || !min[$9])min[$9]=$8
This updates min if the time on the current line, $8, is less than the previously seen minimum for this filename, $9.
if ($8>max[$9])max[$9]=$8
This updates max if the time on the current line, $8, is greater than the previously seen maximum for this filename, $9.
END{for (f in min)print f,min[f],max[f]}
This prints out the results for each file name.
sort
This sorts the output into a cosmetically pleasing form.
similar awk
$ awk '{k=$9;v=$8} # set key (k), value (v)
!(k in min){min[k]=max[k]=v} # initial value for min/max
min[k]>v{min[k]=v} # set min
max[k]<v{max[k]=v} # set max
END{print "Filename | Min HHMM | Max HHMM";
for(k in min) print k,min[k],max[k] | "sort"}' file
Filename | Min HHMM | Max HHMM
A.txt 12:28 21:14
B.txt 09:38 17:05
C.txt 17:09 17:09
D.txt 13:58 13:58
Note that printing the header directly while piping only the data rows to sort keeps the header on the first line.
$ cat > test.awk
BEGIN {
min["\x00""Filename"]="Min_HHMM"OFS"Max_HHMM" # set header in min[], preceded by NUL
} # to place on top when ordering (HACK)
!($9 in min)||min[$9]>$8 { # if candidate smaller than current min
min[$9]=$8 # set new min
}
max[$9]<$8 {
max[$9]=$8 # set new max
}
END {
PROCINFO["sorted_in"]="#ind_str_asc" # set array scanning order for for loop
for(i in min)
print i,min[i],max[i]
}
$ awk -f test.awk file
Filename Min_HHMM Max_HHMM
A.txt 12:28 21:14
B.txt 09:38 17:05
C.txt 17:09 17:09
D.txt 13:58 13:58
The BEGIN hack can be replaced by a static print at the beginning of the END block:
print "Filename"OFS"Min_HHMM"OFS"Max_HHMM";

How to extract a date from a filename with extension using a shell script

I tried to extract the date from filenames, for the first two rows only, which have the extension .log.
For example, my_logFile.txt contains the following filenames:
abc20140916_1.log
abhgg20140914_1.log
abf20140910_1.log
log.abc_abc20140909_1
The code I tried:
awk '{print substr($1,length($1)-3,4)}' my_logFile.txt
But I am getting the output as:
.log
.log
.log
I need the output as:
20140916
20140914
Revised query:
I have a txt file listing n log files. Each line in the txt file is like this:
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 abc20140405_1.log
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 aghtff20140404_1.log
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 log.pqrs20140403_1
I need to extract the date out of the file names from only the first two rows. Here the filename has a varying number of characters before the date.
The output should be:
20140405
20140404
Will this work for you?
$ head -2 file | grep -Po ' [a-z]+\K[0-9]+(?=.*\.log$)'
20140405
20140404
Explanation
head -2 file gets the first two lines of the file.
grep -Po ' [a-z]+\K[0-9]+(?=.*\.log$)' gets the set of digits in between a block of (space + a-z letters) and (.log + end of line).
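If your grep lacks -P (PCRE support), a sed alternative in the same spirit could be used (a sketch; it assumes the date is always the eight digits immediately before the _N.log suffix):
$ head -2 file | sed -E 's/.*[a-z]([0-9]{8})_[0-9]+\.log$/\1/'
20140405
20140404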
Try this:
cut -f9 -d " " <file> | grep -o -E "[0-9]{8}"
It worked on my machine:
[root@giam20 ~]# cat sample.txt
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 abc20140405_1.log
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 aghtff20140404_1.log
-rw-rw-rw- 1 abchost abchost 241315175 Apr 16 10:45 log.pqrs20140403_1
[root@giam20 ~]# cut -f9 -d " " sample.txt | grep -o -E "[0-9]{8}"
20140405
20140404
20140403
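Since the question asks for the first two rows only, the same pipeline can be limited with head:
[root@giam20 ~]# head -2 sample.txt | cut -f9 -d " " | grep -o -E "[0-9]{8}"
20140405
20140404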

Grep to multiple output files

I have one huge file (over 6 GB) and about 1000 patterns. I want to extract the lines matching each pattern into a separate file. For example my patterns are:
1
2
my file:
a|1
b|2
c|3
d|123
As a output I would like to have 2 files:
1:
a|1
d|123
2:
b|2
d|123
I can do it by grepping the file multiple times, but that is inefficient for 1000 patterns and a huge file. I also tried something like this:
grep -f pattern_file huge_file
but it makes only one output file. I can't sort my huge file - it takes too much time. Maybe AWK can do it?
awk -F\| 'NR == FNR {
patt[$0]; next
}
{
for (p in patt)
if ($2 ~ p) print > p
}' patterns huge_file
With some awk implementations you may hit the max number of open files limit.
Let me know if that's the case so I can post an alternative solution.
P.S.: This version will keep only one file open at a time:
awk -F\| 'NR == FNR {
patt[$0]; next
}
{
for (p in patt) {
if ($2 ~ p) print >> p
close(p)
}
}' patterns huge_file
You can accomplish this (if I understand the problem) using bash "process substitution", e.g., consider the following sample data:
$ cal -h
September 2013
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
Then selected lines can be grepped to different output files in a single command as:
$ cal -h \
| tee >( egrep '1' > f1.txt ) \
| tee >( egrep '2' > f2.txt ) \
| tee >( egrep 'Sept' > f3.txt )
In this case, each grep is processing the entire data stream (which may or may not be what you want: this may not save a lot of time vs. just running concurrent grep processes):
$ more f?.txt
::::::::::::::
f1.txt
::::::::::::::
September 2013
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
::::::::::::::
f2.txt
::::::::::::::
September 2013
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
::::::::::::::
f3.txt
::::::::::::::
September 2013
This might work for you (although sed might not be the quickest tool!):
sed 's,.*,/&/w &_file,' pattern_file > sed_file
Then run this file against the source:
sed -nf sed_file huge_file
I did a cursory test, and the GNU sed version 4.1.5 I was using easily opened 1000 files without problems; however, your Unix system may well have smaller limits.
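For the two example patterns in the question, the generated sed_file would contain:
/1/w 1_file
/2/w 2_file
so sed -nf sed_file huge_file writes every line containing 1 to 1_file and every line containing 2 to 2_file (d|123 ends up in both, as in the expected output).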
Grep cannot output matches of different patterns to different files. Tee is able to redirect its input into multiple destinations, but I don't think this is what you want.
Either use multiple grep commands or write a program to do it in Python or whatever other language you fancy.
I had this need, so I added the capability to my own copy of grep.c that I happened to have lying around. But it just occurred to me: if the primary goal is to avoid multiple passes over a huge input, you could run egrep once on the huge input to search for any of your patterns (which, I know, is not what you want), and redirect its output to an intermediate file, then make multiple passes over that intermediate file, once per individual pattern, redirecting to a different final output file each time.
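In script form, that two-pass idea might look something like this (a sketch; patterns is the pattern file from the question, filtered.txt is a hypothetical intermediate file, and the patterns are assumed to be safe to use as output file names, as in the example):
# pass 1: one scan of the huge file, keeping only lines that match any pattern
grep -E -f patterns huge_file > filtered.txt
# pass 2: one much cheaper scan of the intermediate file per pattern
while IFS= read -r p; do
    grep -E -e "$p" filtered.txt > "$p"
done < patterns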
