I have three functions that digest an access.log file on my server.
hitsbyip() {
    cat $ACCESSLOG | awk '{ print $1 }' | uniq -c | sort -nk1 | uniq
}
hitsbyhour() {
    cat $ACCESSLOG | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c
}
hitsbymin() {
    hr=$1
    grep "2015:${hr}" $ACCESSLOG | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c
}
They all work fine when used on their own. All three output two small columns of data.
Now I am looking to create another function called report which will simply use printf and its formatting capabilities to print three columns of data with headers, each column being the output of one of the three functions above. Something like this:
report() {
    printf "%-30b %-30b %-30b\n" `hitsbyip` `hitsbyhour` `hitsbymin 10`
}
The thing is that the format is not what I want; it prints out the columns horizontally instead of side by side.
Any help would be greatly appreciated.
Once you use paste to combine the output of the three commands into a single stream, then you can operate line-by-line to format those outputs.
while IFS=$'\t' read -r by_ip by_hour by_min; do
    printf '%-30b %-30b %-30b\n' "$by_ip" "$by_hour" "$by_min"
done < <(paste <(hitsbyip) <(hitsbyhour) <(hitsbymin 10))
Elements to note:
<() syntax is process substitution; it generates a filename (typically of the form /dev/fd/## on platforms with such support) which will, when read, yield the output of the command given.
paste takes a series of filenames and puts the output of each alongside the others.
Setting IFS=$'\t' while reading ensures that we read content as tab-separated values (the format paste creates). See BashFAQ #1 for details on using read.
Putting quotes on the arguments to printf ensures that we pass each value assembled by read as a single value to printf, rather than letting them be subject to string-splitting and glob-expansion as unquoted values.
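Putting it all together, a minimal sketch of a report function with a header row (assuming the three functions above are already defined; the column labels are placeholders you can rename):
report() {
    # header row, using the same field widths as the data rows
    printf '%-30s %-30s %-30s\n' 'HITS BY IP' 'HITS BY HOUR' 'HITS BY MIN (hour 10)'
    # combine the three outputs side by side, then format each line
    while IFS=$'\t' read -r by_ip by_hour by_min; do
        printf '%-30b %-30b %-30b\n' "$by_ip" "$by_hour" "$by_min"
    done < <(paste <(hitsbyip) <(hitsbyhour) <(hitsbymin 10))
}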
Related
I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors, so I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I want is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green, etc. coloured objects there are.
You can make use of the cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
gets the unique values in field 1; replacing 1 by 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this; for example, to list all the unique values in the first column:
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
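For example, sticking with the first column of the same file, a count of its distinct values would be:
awk < test.txt '{print $1}' | sort | uniq | wc -l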
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed tr wc cut sort uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)
cols=$((cols + 2))
i=0
for ((i=1; i < cols; i++))
do
    echo "Column $i ::"
    cut -f $i < "$FILE" | sort | uniq -c
    echo
done
This script outputs, for each column of a given file, every unique value together with its count. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to the script.
Code
#!/bin/bash
# Note: this uses awk arrays of arrays, which require GNU awk (gawk 4.0 or later)
awk '
(NR==1){
    for(fi=1; fi<=NF; fi++)
        fname[fi]=$fi;
}
(NR!=1){
    for(fi=1; fi<=NF; fi++)
        arr[fname[fi]][$fi]++;
}
END{
    for(fi=1; fi<=NF; fi++){
        out=fname[fi];
        for (item in arr[fname[fi]])
            out=out"\t"item"_"arr[fname[fi]][item];
        print(out);
    }
}
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
I'm trying to get, as a bash variable, the list of users that are in my CSV file. The problem is that the number of users varies and can be anywhere from 1 to 5.
Example CSV file:
"record1_data1","record1_data2","record1_data3","user1","user2"
"record2_data1","record2_data2","record2_data3","user1","user2","user3","user4"
"record3_data1","record3_data2","record3_data3","user1"
I would like to get something like
list_of_users="cat file.csv | grep "record2_data2" | <something> "
echo $list_of_users
user1,user2,user3,user4
I'm trying this:
cat file.csv | grep "record2_data2" | awk -F, -v OFS=',' '{print $4,$5,$6,$7,$8 }' | sed 's/"//g'
My result is:
user2,user3,user4,,
Question:
How do I remove all the "," from the end of my result? Sometimes it is just one, but sometimes it can be user1,,,,
Can I do it in a better way? Users always start after the 3rd column in my file.
This will do what your code seems to be trying to do (print the users for a given string record2_data2 which only exists in the 2nd field):
$ awk -F',' '{gsub(/"/,"")} $2=="record2_data2"{sub(/([^,]*,){3}/,""); print}' file.csv
user1,user2,user3,user4
but I don't see how that's related to your question subject of Getting last X records from CSV file using bash, so I don't know if it's what you really want or not.
Better to use a bash array, and join it into a CSV string when needed:
#!/usr/bin/env bash
readarray -t listofusers < <(cut -d, -f4- file.csv | tr -d '"' | tr ',' $'\n' | sort -u)
IFS=,
printf "%s\n" "${listofusers[*]}"
cut -d, -f4- file.csv | tr -d '"' | tr ',' $'\n' | sort -u is the important bit - it first only prints out the fourth and following fields of the CSV input file, removes quotes, turns commas into newlines, and then sorts the resulting usernames, removing duplicates. That output is then read into an array with the readarray builtin, and you can manipulate it and the individual elements however you need.
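If you would rather have the joined CSV string in a variable without changing the shell's global IFS, you can do the join inside a command substitution, which runs in a subshell; a small sketch building on the same pipeline (list_of_users is the variable name from the question):
list_of_users=$(
    # read the unique usernames into an array (fields 4 onward, quotes stripped)
    readarray -t users < <(cut -d, -f4- file.csv | tr -d '"' | tr ',' $'\n' | sort -u)
    IFS=,                      # the first character of IFS is what "${users[*]}" joins with
    printf '%s' "${users[*]}"  # emit the comma-joined list; the IFS change dies with the subshell
)
echo "$list_of_users"   # e.g. user1,user2,user3,user4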
GNU sed solution. Let file.csv content be
"record1_data1","record1_data2","record1_data3","user1","user2"
"record2_data1","record2_data2","record2_data3","user1","user2","user3","user4"
"record3_data1","record3_data2","record3_data3","user1"
then
sed -n -e 's/"//g' -e '/record2_data/ s/[^,]*,[^,]*,[^,]*,// p' file.csv
gives output
user1,user2,user3,user4
Explanation: -n turns off automatic printing; the expressions' meaning is as follows: the 1st substitutes " globally with the empty string, i.e. deletes them; the 2nd, for a line containing record2_data, substitutes (s) everything up to and including the 3rd , with the empty string, i.e. deletes it, and prints (p) the changed line.
(tested in GNU sed 4.2.2)
awk -F',' '
/record2_data2/{
    for(i=4;i<=NF;i++) o=sprintf("%s%s,",o,$i);
    gsub(/"|,$/,"",o);
    print o
}' file.csv
user1,user2,user3,user4
This might work for you (GNU sed):
sed -E '/record2_data/!d;s/"([^"]*)"(,)?/\1\2/4g;s///g' file
Delete all records except for that containing record2_data.
Remove double quotes from the fourth field onward.
Remove any remaining double-quoted fields.
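The last s///g has an empty pattern, which in sed means "reuse the previous regular expression"; a throwaway illustration of that behaviour, with a made-up three-field line and the numeric flag lowered to 3:
echo '"a","b","user1"' | sed -E 's/"([^"]*)"(,)?/\1\2/3g;s///g'
# prints: user1   (quotes stripped from the 3rd quoted field onward, the earlier quoted fields deleted)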
I have two C source files with lots of defines and I want to compare them to each other and filter out lines that do not match.
The grep (grep NO_BCM_ include/soc/mcm/allenum.h | grep -v 56440) output of the first file may look like:
...
...
# if !defined(NO_BCM_5675_A0)
# if !defined(NO_BCM_88660_A0)
# if !defined(NO_BCM_2801PM_A0)
...
...
where grep (grep "define NO_BCM" include/sdk_custom_config.h) of the second looks like:
...
...
#define NO_BCM_56260_B0
#define NO_BCM_5675_A0
#define NO_BCM_56160_A0
...
...
So now I want to find any of the type numbers in the parentheses above that are missing from the #define lines below. How do I best go about this?
Thank you
You could use awk with two process substitutions feeding it the grep output:
awk 'FNR==NR{seen[$2]; next}!($2 in seen)' FS=" " <(grep "define NO_BCM" include/sdk_custom_config.h) FS="[()]" <(grep NO_BCM_ include/soc/mcm/allenum.h | grep -v 56440)
# if !defined(NO_BCM_88660_A0)
# if !defined(NO_BCM_2801PM_A0)
The idea is that the commands within <() execute and produce their output as needed. Setting FS before each input ensures the common token is parsed with the proper delimiter.
FS="[()]" captures $2 as the unique field in the second group, and FS=" " gives the default whitespace delimiting for the first group.
The core logic of the awk is identifying non-repeating elements: FNR==NR is true while parsing the first group, where the unique entries in $2 are stored as keys of a hash map. Once all those lines are parsed, !($2 in seen) is applied to the second group, which keeps only those lines whose $2 is not present in the hash.
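A stripped-down illustration of the FNR==NR idiom, using two hypothetical one-token-per-line files:
# defined.txt holds the tokens you already have; used.txt holds the tokens being checked (both names made up)
awk 'FNR==NR { seen[$1]; next } !($1 in seen)' defined.txt used.txt
# prints the lines of used.txt whose first field never appeared in defined.txt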
Use comm this way:
comm -23 <(grep NO_BCM_ include/soc/mcm/allenum.h | cut -f2 -d'(' | cut -f1 -d')' | sort) <(grep "define NO_BCM" include/sdk_custom_config.h | cut -f2 -d' ' | sort)
This would give tokens unique to include/soc/mcm/allenum.h.
Output:
NO_BCM_2801PM_A0
NO_BCM_88660_A0
If you want the full lines from that file, then you can use fgrep:
fgrep -f <(comm -23 <(grep NO_BCM_ include/soc/mcm/allenum.h | cut -f2 -d'(' | cut -f1 -d')' | sort) <(grep "define NO_BCM" include/sdk_custom_config.h | cut -f2 -d' ' | sort)) include/soc/mcm/allenum.h
Output:
# if !defined(NO_BCM_88660_A0)
# if !defined(NO_BCM_2801PM_A0)
About comm:
NAME
comm - compare two sorted files line by line
SYNOPSIS
comm [OPTION]... FILE1 FILE2
DESCRIPTION
Compare sorted files FILE1 and FILE2 line by line.
With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files.
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
-3 suppress column 3 (lines that appear in both files)
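To get a quick feel for comm -23, here is a throwaway example with two hypothetical files (inputs must be sorted):
printf 'A\nB\nC\n' > left.txt
printf 'B\nD\n' > right.txt
comm -23 left.txt right.txt    # prints A and C, the lines unique to left.txt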
It's hard to say without more surrounding context from your sample input files and an expected output, but it sounds like this is all you need:
awk '!/define.*NO_BCM_/{next} NR==FNR{defined[$2];next} !($2 in defined)' include/sdk_custom_config.h FS='[()]' include/soc/mcm/allenum.h
I am writing a bash script to iterate through file lines with a given value.
The command I am using to list the possible values is:
cat file.csv | cut -d';' -f2 | sort | uniq | head
When I use it in a for loop like this, it stops working:
for i in $( cat file.csv | cut -d';' -f2 | sort | uniq | head )
do
    # do something else with these lines
done
How can I use piped commands in a for loop?
You can use this awk command to get the sum of the 3rd column for each unique value of the 2nd column:
awk -F ';' '{sums[$2]+=$3} END{for (i in sums) print i ":", sums[i]}' file.csv
Input data:
asd;foo;0
asd;foo;2
asd;bar;1
asd;foo;4
Output:
foo: 6
bar: 1
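If you still want to consume the piped output in the shell itself, a while read loop fed by process substitution (the same pattern used in the paste answer earlier on this page) is generally more robust than for over a command substitution; a minimal sketch:
while IFS= read -r value; do
    # do something else with "$value" here
    printf 'processing %s\n' "$value"
done < <(cut -d';' -f2 file.csv | sort | uniq | head)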
Ok, so I need to create a command that lists the 100 most frequent words in any given file, in a block of text.
What I have at the moment:
$ alias words='tr " " "\012" <hamlet.txt | sort -n | uniq -c | sort -r | head -n 10'
outputs
$ words
14 the
14 of
8 to
7 and
5 To
5 The
5 And
5 a
4 we
4 that
I need it to output in the following format:
the of to and To The And a we that
((On that note, how would I tell it to print the output in all caps?))
And I need to change it so that I can pipe any file to 'words': instead of having the file specified within the pipeline, the initial input would name the file and the pipeline would do the rest.
Okay, taking your points one by one, though not necessarily in order.
You can change words to use standard input just by removing the <hamlet.txt bit since tr will take its input from standard input by default. Then, if you want to process a specific file, use:
cat hamlet.txt | words
or:
words <hamlet.txt
You can remove the effects of capital letters by making the first part of the pipeline:
tr '[A-Z]' '[a-z]'
which will lower-case your input before doing anything else.
Lastly, if you take that entire pipeline (with the suggested modifications above) and then pass it through a few more commands:
| awk '{printf "%s ", $2}END{print ""}'
This prints the second field of each line (the word) followed by a space, then prints an empty string with a terminating newline at the end.
For example, the following script words.sh will give you what you need:
tr '[A-Z]' '[a-z]' | tr ' ' '\012' | sort -n | uniq -c | sort -r
| head -n 3 | awk '{printf "%s ", $2}END{print ""}'
(on one line: I've split it for readability) as per the following transcript:
pax> echo One Two two Three three three Four four four four | ./words.sh
four three two
You can achieve the same end with the following alias:
alias words="tr '[A-Z]' '[a-z]' | tr ' ' '\012' | sort -n | uniq -c | sort -r
| head -n 3 | awk '{printf \"%s \", \$2}END{print \"\"}'"
(again, one line) but, when things get this complex, I prefer a script, if only to avoid interminable escape characters :-)
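As for the parenthetical question about printing the output in all caps: one way is to add a final tr stage to the pipeline (a sketch, reusing the same bracket-range style as above):
| awk '{printf "%s ", $2}END{print ""}' | tr '[a-z]' '[A-Z]'
which upper-cases the finished word list just before it is printed.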