put awk code in bash and sort the result - bash

I have an awk script that combines 2 files and appends the result to the end of file.txt using ">>"
my code
NR==FNR && $2!=0 {two[$0]++;j=1; next }{for(i in two) {split(i,one,FS); if(one[3] == $NF){x=$4;sub( /[[:digit:]]/, "A", $4); print j++,$1,$2,$3,x,$4 | "column -t" ">>" "./Desktop/file.txt"}}}
I want to put my awk code into a bash script, and finally sort file.txt and save the sorted result back to file.txt using >
I tried this:
#!/bin/bash
command=$(awk '{NR==FNR && $2!=0 {two[$0]++;j=1; next }{for(i in two) {split(i,one,FS); if(one[3] == $NF){x=$4;sub( /[[:digit:]]/, "A", $4); print $1,$2,$3,$4 | "column -t" ">>" "./Desktop/file.txt"}}}}')
echo -e "$command" | column -t | sort -s -n -k4 > ./Desktop/file.txt
but it gives me the error "for reading (no such file or directory)".
Where is my mistake?
Thanks in advance

1) you aren't specifying the input files for your awk script. This:
command=$(awk '{...stuff...}')
needs to be:
command=$(awk '{...stuff...}' file1 file2)
2) You moved your awk condition "NR==FNR ..." inside the action part, so it no longer behaves as a condition.
3) Your awk script output is going into "file.txt" so "command" is empty when you echo it on the subsequent line.
4) You have unused variables x and j
5) You pass the arg FS to split() unnecessarily.
etc...
I THINK what you want is:
command=$( awk '
NR==FNR && $2!=0 { two[$0]++; next }
{
for(i in two) {
split(i,one)
if(one[3] == $NF) {
sub(/[[:digit:]]/, "A", $4)
print $1,$2,$3,$4
}
}
}
' file1 file2 )
echo -e "$command" | column -t | sort -s -n -k4 > "./Desktop/file.txt"
but it's hard to tell.
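To make the pattern concrete, here is a minimal runnable sketch with made-up input data and temporary paths (not the asker's real files): awk writes to stdout only, the shell captures that output, and column/sort run once at the end so the final `>` can safely overwrite the target file.

```shell
# Made-up sample input; the real script would pass file1 and file2 here.
printf 'a aa aaa 2\nb bb bbb 1\n' > /tmp/in1.txt

# awk prints to stdout only -- no ">>" redirection inside the awk script --
# so the command substitution actually captures the output.
command=$(awk '{print $1, $2, $3, $4}' /tmp/in1.txt)

# Format and sort once, overwriting the target with the sorted result.
echo "$command" | column -t | sort -s -n -k4 > /tmp/file.txt
```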

Related

Splitting out a large file

I would like to process a 200 GB file with lines like the following:
...
{"captureTime": "1534303617.738","ua": "..."}
...
The objective is to split this file into multiple files grouped by hours.
Here is my basic script:
#!/bin/sh
echo "Splitting files"
echo "Total lines"
sed -n '$=' $1
echo "First Date"
head -n1 $1 | jq '.captureTime' | xargs -i date -d '#{}' '+%Y%m%d%H'
echo "Last Date"
tail -n1 $1 | jq '.captureTime' | xargs -i date -d '#{}' '+%Y%m%d%H'
while read p; do
date=$(echo "$p" | sed 's/{"captureTime": "//' | sed 's/","ua":.*//' | xargs -i date -d '#{}' '+%Y%m%d%H')
echo $p >> split.$date
done <$1
Some facts:
80 000 000 lines to process
jq doesn't work well since some JSON lines are invalid.
Could you help me to optimize this bash script?
Thank you
This awk solution might come to your rescue:
awk -F'"' '{file=strftime("%Y%m%d%H",$4); print >> file; close(file) }' $1
It essentially replaces your while-loop.
Furthermore, you can replace the complete script with:
# Start AWK file
BEGIN{ FS="\"" }
(NR==1){tmin=tmax=$4}
($4 > tmax) { tmax = $4 }
($4 < tmin) { tmin = $4 }
{ file="split."strftime("%Y%m%d%H",$4); print >> file; close(file) }
END {
print "Total lines processed: ", NR
print "First date: "strftime("%Y%m%d%H",tmin)
print "Last date: "strftime("%Y%m%d%H",tmax)
}
Which you then can run as:
awk -f <awk_file.awk> <jq-file>
Note: the usage of strftime indicates that you need to use GNU awk.
You can start optimizing by replacing this
sed 's/{"captureTime": "//' | sed 's/","ua":.*//'
with this
sed -nE 's/(\{"captureTime": ")([0-9\.]+)(.*)/\2/p'
-n suppress automatic printing of pattern space
-E use extended regular expressions in the script
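As a quick sanity check, the sed command extracts just the epoch timestamp from a line in the question's format (the sample line below is made up to match that format):

```shell
# One sample line in the question's format (contents are illustrative).
line='{"captureTime": "1534303617.738","ua": "Mozilla/5.0"}'

# Keep only the numeric timestamp (capture group 2).
ts=$(printf '%s\n' "$line" | sed -nE 's/(\{"captureTime": ")([0-9\.]+)(.*)/\2/p')
echo "$ts"   # 1534303617.738
```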

Shell Script for combining 3 files

I have 3 files with below data
$cat File1.txt
Apple,May
Orange,June
Mango,July
$cat File2.txt
Apple,Jan
Grapes,June
$cat File3.txt
Apple,March
Mango,Feb
Banana,Dec
I require the below output file.
$Output_file.txt
Apple,May|Jan|March
Orange,June
Mango,July|Feb
Grapes,June
Banana,Dec
The requirement here is to use the first column as a key: values that are common in column 1 across the files need to be matched, and the corresponding second columns joined with "|". If a key has no match in the other files, its line needs to be printed to the output file as-is.
I have tried putting this in a while loop, but it takes more time as the file size increases. I wanted a simple solution using a shell script.
This should work :
#!/bin/bash
for FRUIT in $( cat "$@" | cut -d "," -f 1 | sort | uniq )
do
echo -ne "${FRUIT},"
awk -F "," "\$1 == \"$FRUIT\" {printf(\"%s|\",\$2)}" "$@" | sed 's/.$/\'$'\n/'
done
Run it as :
$ ./script.sh File1.txt File2.txt File3.txt
A purely native-bash solution (calling no external tools, and thus limited only by the performance constraints of bash itself) might look like:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[123].*) echo "ERROR: Bash 4 or newer required" >&2; exit 1;; esac
declare -A items=( )
for file in "$@"; do
while IFS=, read -r key value; do
items[$key]+="|$value"
done <"$file"
done
for key in "${!items[@]}"; do
value=${items[$key]}
printf '%s,%s\n' "$key" "${value#'|'}"
done
...called as ./yourscript File1.txt File2.txt File3.txt
This is fairly easy to do with a single awk command:
awk 'BEGIN{FS=OFS=","} {a[$1] = a[$1] (a[$1] == "" ? "" : "|") $2}
END {for (i in a) print i, a[i]}' File{1,2,3}.txt
Orange,June
Banana,Dec
Apple,May|Jan|March
Grapes,June
Mango,July|Feb
If you want output in the same order as strings appear in original files then use this awk:
awk 'BEGIN{FS=OFS=","} !($1 in a) {b[++n] = $1}
{a[$1] = a[$1] (a[$1] == "" ? "" : "|") $2}
END {for (i=1; i<=n; i++) print b[i], a[b[i]]}' File{1,2,3}.txt
Apple,May|Jan|March
Orange,June
Mango,July|Feb
Grapes,June
Banana,Dec
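The ordered variant can be verified end to end. This sketch recreates the question's three sample files under /tmp (the paths are stand-ins) and runs the same awk program:

```shell
# Recreate the sample inputs from the question (temporary stand-in paths).
printf 'Apple,May\nOrange,June\nMango,July\n'   > /tmp/File1.txt
printf 'Apple,Jan\nGrapes,June\n'               > /tmp/File2.txt
printf 'Apple,March\nMango,Feb\nBanana,Dec\n'   > /tmp/File3.txt

# b[] remembers the first-appearance order of each key; a[] accumulates
# the "|"-joined values.
awk 'BEGIN{FS=OFS=","} !($1 in a) {b[++n] = $1}
     {a[$1] = a[$1] (a[$1] == "" ? "" : "|") $2}
     END {for (i=1; i<=n; i++) print b[i], a[b[i]]}' \
    /tmp/File1.txt /tmp/File2.txt /tmp/File3.txt
```

The first output line is Apple,May|Jan|March and the last is Banana,Dec, matching the requested order.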

remove delimiter if condition not satisfied and substitute a string on condition

Consider the below file.
HEAD~XXXX
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~SCRIPT~~~
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~TPSCRI~~~
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~RSCPIT~~~
TAIL~20
I wish the output to be like below for the above:
HEAD~XXXX
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~SCRIPT~~~
XXX~XXX~XXX~XXX~XXX~XXX~~~~~~
XXX~XXX~XXX~XXX~XXX~XXX~~~~~~
TAIL~20
If the 9th field is not SCRIPT, I want both the 8th & 9th fields to be emptied like the 10th. Lines containing the words HEAD/TAIL have to be ignored by this condition (i.e., NF!=13) - I need the header & footer in the output exactly as they are in the input.
I have tried the below, but there should be a smarter way.
awk -F'~' -v OFS='~' '($9 != "Working line takeover with change of CP" {$9 = ""}) && ($9 != "Working line takeover with change of CP" {$8 = ""}) {NF=13; print}' file
The above doesn't work. I also tried:
head -1 file > head
tail -1 file > tail
sed -i '/HDR/d' file
sed -i '/TLR/d' file
sed -i '/^\s*$/d' file
awk -F'~' -v OFS='~' '$9 != "Working line takeover with change of CP" {$9,$8 = ""} {NF=13; print}' file >> file.tmp //syntax error
cat file.tmp >> head
cat tail >> head
echo "" >> head
mv head file1
I'm trying to write a UNIX shell script with the below requirements.
Consider a file like this..
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~SCRIPT~~~
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~OTHERS~~~~
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~OTHERS~~~
Each line should have 12 fields (~ as delimiter); if not, a ~ has to be removed.
If anything OTHER than the string SCRIPT is present in the 10th field, the field has to be emptied.
I tried the below in /bin/bash. I know I'm not doing it well; I'm feeding each line to sed & awk commands.
while read readline
echo "entered while"
do
fieldcount=`echo $readline | awk -F '~' '{print NF}'`
echo "Field count printed"
if [ $fieldcount -eq 13 ] && [ $fieldcount -ne 12 ]
then
echo "entering IF & before deletion"
#remove delimiter at the end of line
#echo "$readline~" >> $S_DIR/$1.tmp
#sed -i '/^\s*$/d' $readline
sed -i s'/.$//' $readline
echo "after deletion"
if [ awk '/SCRIPT/' $readline -ne "SCRIPT"]
then
#sed -i 's/SCRIPT//' $readline
replace_what="OTHERS"
#awk -F '~' -v OFS=~ '{$'$replace_what'=''; print }'
sed -i 's/[^,]*//' $replace_what
echo "$readline" >> $S_DIR/$1.tmp
fi
else
echo "$readline" >> $S_DIR/$1.tmp
fi
done < $S_DIR/$1
awk -F'~' -v OFS='~' '$10 != "SCRIPT" {$10 = ""} {NF=12; print}' file
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~SCRIPT~~
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~~~
XXX~XXX~XXX~XXX~XXX~XXX~~XXX~~~~
In bash, I would write:
(
# execute in a subshell, so the IFS setting is localized
IFS='~'
while read -ra fields; do
[[ ${fields[9]} != "SCRIPT" ]] && fields[9]=''
echo "${fields[*]:0:12}"
done < file
)
Your followup question:
awk -F'~' -v OFS='~' '
$1 == "HEAD" || $1 == "TAIL" {print; next}
$9 != "SCRIPT" {$8 = $9 = ""}
{NF=13; print}
' file
If you have further questions, please create a new question instead of editing this one.
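That follow-up command can be sanity-checked against the sample input from the question, recreated here in a temporary file (only line contents from the question are used):

```shell
# Recreate the sample input: header, footer, one SCRIPT line and one
# non-SCRIPT line (the temporary path is a stand-in).
cat > /tmp/cond.txt <<'EOF'
HEAD~XXXX
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~SCRIPT~~~
XXX~XXX~XXX~XXX~XXX~XXX~~WIN~TPSCRI~~~
TAIL~20
EOF

# HEAD/TAIL lines pass through untouched; on other lines, fields 8 and 9
# are emptied unless field 9 is SCRIPT, and assigning NF rebuilds $0.
awk -F'~' -v OFS='~' '
  $1 == "HEAD" || $1 == "TAIL" {print; next}
  $9 != "SCRIPT" {$8 = $9 = ""}
  {NF=13; print}
' /tmp/cond.txt
```

The SCRIPT line keeps its 8th and 9th fields; the TPSCRI line has both blanked out.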

How can I specify a row in awk in for loop?

I'm using the following awk command:
my_command | awk -F "[[:space:]]{2,}+" 'NR>1 {print $2}' | egrep "^[[:alnum:]]"
which successfully returns my data like this:
fileName1
file Name 1
file Nameone
f i l e Name 1
So as you can see some file names have spaces. This is fine as I'm just trying to echo the file name (nothing special). The problem is calling that specific row within a loop. I'm trying to do it this way:
i=1
for num in $rows
do
fileName=$(my_command | awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]])"
echo "$num $fileName"
$((i++))
done
But my output is always null
I've also tried using awk -v record=$i and then printing $record but I get the below results.
f i l e Name 1
EDIT
Sorry for the confusion: rows is a variable that lists ids like this: 11 12 13
and each one of those ids ties to a file name. My command's output, without any parsing, looks like this:
id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3
I can only use the id field to run the command that I need, but I want to use the File Info field to notify the user of the actual file that the command is being executed against.
I think your $i does not expand as expected. You should quote your arguments this way:
fileName=$(my_command | awk -F "[[:space:]]{2,}+" "NR==$i {print \$2}" | egrep "^[[:alnum:]]")
And you forgot the other ).
EDIT
As an update to your requirement, you could just pass the rows to a single awk command instead of a repetitive one inside a loop:
#!/bin/bash
ROWS=(11 12)
function my_command {
# This function just emulates my_command and should be removed later.
echo " id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3"
}
awk -- '
BEGIN {
input = ARGV[1]
while (getline line < input) {
sub(/^ +/, "", line)
split(line, a, / +/)
for (i = 2; i < ARGC; ++i) {
if (a[1] == ARGV[i]) {
printf "%s %s\n", a[1], a[2]
break
}
}
}
exit
}
' <(my_command) "${ROWS[@]}"
That awk command could be condensed to one line as:
awk -- 'BEGIN { input = ARGV[1]; while (getline line < input) { sub(/^ +/, "", line); split(line, a, / +/); for (i = 2; i < ARGC; ++i) { if (a[1] == ARGV[i]) {; printf "%s %s\n", a[1], a[2]; break; }; }; }; exit; }' <(my_command) "${ROWS[@]}"
Or better yet just use Bash instead as a whole:
#!/bin/bash
shopt -s extglob # the +( ) pattern below requires extglob
ROWS=(11 12)
while IFS=$' ' read -r LINE; do
IFS='|' read -ra FIELDS <<< "${LINE// +( )/|}"
for R in "${ROWS[@]}"; do
if [[ ${FIELDS[0]} == "$R" ]]; then
echo "${R} ${FIELDS[1]}"
break
fi
done
done < <(my_command)
It should give an output like:
11 File Name1
12 Fi leNa me2
Shell variables aren't expanded inside single-quoted strings. Use the -v option to set an awk variable to the shell variable:
fileName=$(my_command | awk -v i=$i -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]")
This method avoids having to escape all the $ characters in the awk script, as required in konsolebox's answer.
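A toy example of the -v fix (the input file and the value of i below are made up): the shell variable is handed to awk as an awk variable, so the single-quoted script needs no escaping at all.

```shell
# Made-up three-line input standing in for my_command's output.
printf 'one alpha\ntwo beta\nthree gamma\n' > /tmp/rows_demo.txt

# i is a shell variable; -v i="$i" makes it visible inside awk as i.
i=2
fileName=$(awk -v i="$i" 'NR==i {print $2}' /tmp/rows_demo.txt)
echo "$fileName"   # beta
```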
As you already heard, you need to populate an awk variable from your shell variable to be able to use the desired value within the awk script, so this:
awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]]"
should be this:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
Also, though, you don't need awk AND grep since awk can do anything grep can do, so you can change this part of your script:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
to this:
awk -v i="$i" -F "[[:space:]]{2,}+" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
and you don't need a + after a numeric range so you can change {2,}+ to just {2,}:
awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
Most importantly, though, instead of invoking awk once for every invocation of my_command, you can just invoke it once for all of them, i.e. instead of this (assuming this does what you want):
i=1
for num in $rows
do
fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}')
echo "$num $fileName"
$((i++))
done
you can do something more like this:
for num in $rows
do
my_command
done |
awk -F '[[:space:]]{2,}' '$2~/^[[:alnum:]]/{print NR, $2}'
I say "something like" because you don't tell us what "my_command", "rows" or "num" are so I can't be precise but hopefully you see the pattern. If you give us more info we can provide a better answer.
It's pretty inefficient to rerun my_command (and awk) every time through the loop just to extract one line from its output. Especially when all you're doing is printing out part of each line in order. (I'm assuming that my_command really is exactly the same command and produces the same output every time through your loop.)
If that's the case, this one-liner should do the trick:
paste -d' ' <(printf '%s\n' $rows) <(my_command |
awk -F '[[:space:]]{2,}+' '($2 ~ /^[[:alnum:]]/) {print $2}')

How can I print the duplicates in a file only once?

I have an input file that contains:
123,apple,orange
123,pineapple,strawberry
543,grapes,orange
790,strawberry,apple
870,peach,grape
543,almond,tomato
123,orange,apple
i want the output to be:
The following numbers are repeated:
123
543
Is there a way to get this output using awk? I'm writing the script in bash on Solaris.
sed -e 's/,/ , /g' <filename> | awk '{print $1}' | sort | uniq -d
awk -vFS=',' \
'{KEY=$1;if (KEY in KEYS) { DUPS[KEY]; }; KEYS[KEY]; } \
END{print "Repeated Keys:"; for (i in DUPS){print i} }' \
< yourfile
There are solutions with sort/uniq/cut as well (see the other answers).
If you can live without awk, you can use this to get the repeating numbers:
cut -d, -f 1 my_file.txt | sort | uniq -d
Prints
123
543
Edit: (in response to your comment)
You can buffer the output and decide if you want to continue. For example:
out=$(cut -d, -f 1 a.txt | sort | uniq -d | tr '\n' ' ')
if [[ -n $out ]] ; then
echo "The following numbers are repeated: $out"
exit
fi
# continue...
This script will print only the numbers in the first column that appear more than once:
awk -F, '{a[$1]++}END{printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print ""}' file
Or in a bit shorter form:
awk -F, 'BEGIN{printf "Repeated "}(a[$1]++ == 1){printf "%s ", $1}END{print ""} ' file
If you want to exit your script in case a dup is found, then you can exit with a non-zero exit code. For example:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(1)}}' file
In your main script you can do:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(-1)}}' file || exit -1
Or in a more readable format:
awk -F, '
a[$1]++==1{
dup=1
}
END{
if (dup) {
printf "The following numbers are repeated: ";
for (i in a)
if (a[i]>1)
printf "%s ",i;
print "";
exit(-1)
}
}
' file || exit -1
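Putting the simplest approach together with the question's sample data (recreated inline here), the whole check fits in a few lines:

```shell
# Recreate the sample input from the question.
cat > /tmp/fruits.txt <<'EOF'
123,apple,orange
123,pineapple,strawberry
543,grapes,orange
790,strawberry,apple
870,peach,grape
543,almond,tomato
123,orange,apple
EOF

# First column -> sorted -> only duplicated keys, joined on one line.
dups=$(cut -d, -f1 /tmp/fruits.txt | sort | uniq -d | tr '\n' ' ')
echo "The following numbers are repeated: $dups"
```

This prints 123 and 543, the two keys that occur more than once.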
