I need to find out which users have connected to Facebook through my proxy. I have two loops.
# my logs are named like squid.log.12.10.2017
# with `ls squid.log*` I work through all squid logs, day by day
for i in `ls squid.log*`; do
echo "There is log $i, that we need to check"
# count the IP addresses that connected to fb.com and append them to ~/temp_cache
# like this:
# 25 192.168.110.5
# 41 192.168.110.2
# where 192.168.110.5 has connected to fb.com 25 times
# and 192.168.110.2 has connected to fb.com 41 times
zgrep fb.com /var/log/$i | cut -d " " -f1 | sort | uniq -c | sort -n -k 1 >> ~/temp_cache
# get only the list of IPs, without the count of connections to Facebook
# like this:
# 192.168.110.5
# 192.168.110.2
ip=$(zgrep fb.com /var/log/$i | cut -d " " -f1 | sort | uniq -c | sort -n -k 1 | awk '{print $2}')
for y in $ip; do
echo "Users from $y:"
# we have a project system whose log records IP addresses and the logins made from them
# e.g. for the IP 192.168.110.5 I get the user name duke
# main result, like this:
# duke
# the_rock
redmine_users=$(tail -n 500000 /usr/share/redmine/log/production.log | grep -A 3 "$y" | grep "Current user" | awk '{print $3}' | head -n 1)
# append the matching user name to each line
# so the result should look like this:
# 25 192.168.110.5 duke
# 41 192.168.110.2 the_rock
counter=$((counter+1))
sed -i "$counter s|$| $redmine_users |" ~/temp_cache
done
# Delimiter for each day of logs
echo "------------------------------------------------" >> ~/temp_cache
done
At first glance it works, but only for one day. When the script moves on to the second log, i.e. squid.log.13.10.2017, it produces something like this:
25 192.168.110.5 duke
41 192.168.110.2 the_rock
______________________________ hogan
33 192.168.110.1
But I want this:
25 192.168.110.5 duke
41 192.168.110.2 the_rock
______________________________
33 192.168.110.1 hogan
I tried running the script manually for one day, with the ______________________________ line already present, and with changing
counter=$((counter+1))
sed -i "$counter s|$| $redmine_users |" ~/temp_cache
to
counter=1
counter=$((counter+1))
sed -i "$counter s|$| $redmine_users |" ~/temp_cache
But as a result I get:
______________________________
25 192.168.110.5 duke the_rock
41 192.168.110.2
How do I get what I want, at least this:
______________________________
25 192.168.110.5 duke
41 192.168.110.2 the_rock
How should the counter be changed in this construction:
counter=$((counter+1))
sed -i "$counter s|$| $redmine_users |" ~/temp_cache
If I understand correctly, you are trying to get the name onto the same line as the count and IP?
Assuming this is correct, you simply need to increase your counter after adding the ---- delimiter as well as where it currently is.
I do have multiple other issues with the code, but I think this answers the main question. Let me know if I have missed the mark.
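For illustration, a minimal sketch of the end of the outer loop with that extra increment (assuming the rest of the script stays unchanged):
done   # end of the inner "for y" loop
# Delimiter for each day of logs
echo "------------------------------------------------" >> ~/temp_cache
counter=$((counter+1))   # the delimiter occupies a line in ~/temp_cache too
done   # end of the outer "for i" loop
Without that extra increment, the next day's sed targets the delimiter line, which is why hogan ends up appended to the dashes.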
I have found it: the counter increment should be in the first loop. Thanks for your attention.
Related
In a bash shell script, I want to go through a list of numbers and then print out the number that occurs most often. If several different numbers appear an equal number of times, I want to print the highest number. For example, in a file like this:
10
10
10
15
15
20
20
20
20
I want to print the value 20.
How can I achieve this?
If the numbers are in a file, one per line:
sort < myfile | uniq -c | sort -r | head -1
without the count:
A=$(sort < myfile | uniq -c | sort -r | head -1)
set $A
echo $2
You can use this command:
echo 10 10 10 15 15 20 20 20 20 | sed 's/ /\n/g' | sort | uniq -c | sort -V | tail -n 1 | awk '{print $2}'
It will print the number you want.
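If ties should resolve to the highest number, as the question asks, a variant that sorts numerically by count and then by value can help; a sketch, assuming the numbers sit one per line in myfile:
sort -n myfile | uniq -c | sort -k1,1n -k2,2n | tail -n 1 | awk '{print $2}'
The second sort puts the largest count last and, within equal counts, the largest number last, so tail -n 1 selects exactly the requested line.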
Normally when I do cat number.txt | sort -n | uniq -c, I get numbers like this:
3 43
4 66
2 96
1 97
But what I need is the number of occurrences at the end, like this:
43 3
66 4
96 2
97 1
Please give advice on how to change this. Thanks.
Use awk to change the order of columns:
cat number.txt | sort -n | uniq -c | awk '{ print $2, $1 }'
Perl version:
perl -lne '$occ{0+$_}++; END {print "$_ $occ{$_}" for sort {$a <=> $b} keys %occ}' < numbers.txt
With GNU sed:
cat number.txt | sort -n | uniq -c | sed -r 's/^([0-9]+) ([0-9]+)$/\2 \1/g'
I'm trying to write a shell script that prints the full names of users logged on to a machine. The finger command gives me a list of users, but there are many duplicates. How can I loop through and print out only the unique ones?
Edit:
This is the format of what finger gives me:
xxxx XX of group XXX pts/59 1:00 Feb 13 16:38
xxxx XX of group XXX pts/71 1:11 Feb 13 16:27
xxxx XX of group XXX pts/105 1d Feb 12 15:22
xxxx YY of group YYY pts/102 2:19 Feb 13 14:13
xxxx ZZ of group ZZZ pts/42 2d Feb 7 12:11
I'm trying to extract the full name (i.e. whatever comes before 'of group' in column 2), so I would be using awk together with finger.
What you want is actually fairly difficult in a shell script. Here, for example, is my full output of finger(1):
Login Name TTY Idle Login Time Office Phone
martin Martin Tournoij *v0 1d Wed 14:11
martin Martin Tournoij pts/2 22 Wed 15:37
martin Martin Tournoij pts/5 41 Thu 23:16
martin Martin Tournoij pts/7 31 Thu 23:24
martin Martin Tournoij pts/8 Thu 23:29
You want the full name, but this may contain one space (as in my example), or none ('Teller'), or three ('Captain James T. Kirk'). So you can't just use a space as the delimiter. You could use the character position of 'TTY' in the header as an indicator, but that's not very elegant IMHO (especially with shell scripting).
My solution is therefore slightly different: first get only the username from finger(1), then get the full name from /etc/passwd:
#!/bin/sh
prev=""
for u in $(finger | tail +2 | cut -w -f1 | sort); do
[ "$u" = "$prev" ] && continue
echo "$u $(grep "^$u" /etc/passwd | cut -d: -f5)"
prev="$u"
done
Which gives me both the username & login name:
martin Martin Tournoij
Obviously, you can also print just the real name (without the $u).
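As a side note, where getent is available, the lookup also covers accounts that don't live in /etc/passwd (NIS, LDAP); a sketch of the same echo using it:
echo "$u $(getent passwd "$u" | cut -d: -f5)"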
The sort and uniq coreutils commands can be used to remove duplicates.
finger | sort -u
This will remove all duplicate lines, but you will still see similar lines due to how verbose the finger command is. If you just want a list of usernames, you can filter it out further to be very specific.
finger | cut -d ' ' -f1 | sort -u
Now, you can take this one step further, and remove the "header/label" line printed out by the finger command.
finger | cut -d ' ' -f1 | sort -u | grep -iv login
Hope this helps.
Other possible solution:
finger | tail -n +2 | awk '{ print $1 }' | sort | uniq
tail -n +2 to omit the first line.
awk '{ print $1 }' to extract the first column.
sort to prepare input for uniq.
uniq removes duplicates.
If you want to iterate use:
for user in $(finger | tail -n +2 | awk '{ print $1 }' | sort | uniq)
do
echo "$user"
done
Could this be simpler?
No spaces or any other special characters to worry about!
finger -l | awk '/^Login/'
Edit: to remove the content after 'of group':
finger -l | awk '/^Login/' | sed 's/of group.*//g'
Output:
Login: xx Name: XX
Login: yy Name: YY
Login: zz Name: ZZ
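If only the unique full names are wanted (the original goal), the same idea can be extended; a sketch, assuming the Name field is the last item on each Login line:
finger -l | awk '/^Login/' | sed -e 's/.*Name: //' -e 's/ of group.*//' | sort -u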
Suppose I have a file similar to the following:
Abigail 85
Kaylee 25
Kaylee 25
kaylee
Brooklyn
Kaylee 25
kaylee 25
I would like to find the most repeated line; the output must be just the line.
I've tried
sort list | uniq -c
but I need clean output, just the most repeated line (in this example Kaylee 25).
Kaizen ~
$ sort zlist | uniq -c | sort -r | head -1 | xargs | cut -d" " -f2-
Kaylee 25
Does this help?
IMHO, none of these answers will sort the results correctly. The reason is that sort, without the -n option, will sort like "1 10 11 2 3 4", etc., instead of "1 2 3 4 10 11 12". So, add -n like so:
sort zlist | uniq -c | sort -n -r | head -1
You can then, of course, pipe that to either xargs or sed as described earlier.
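For example, a sketch with sed stripping the leading count and whitespace:
sort zlist | uniq -c | sort -n -r | head -1 | sed 's/^ *[0-9]* //'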
With awk:
awk '{a[$0]++; if(m<a[$0]){ m=a[$0];s[m]=$0}} END{print s[m]}' t.lis
$ sort list | uniq -c | sort -rn | head -1 | awk '{$1=""}1'
Kaylee 25
Is this what you're looking for?
I have a list of files in the following format:
Group_2012_01_06_041505.csv
Region_2012_01_06_041508.csv
Region_2012_01_06_070007.csv
XXXX_YYYY_MM_DD_HHMMSS.csv
What is the best way to compile a list of the last generated file for each day per group from the last 7 days?
Version that worked on HP-UX
for d in 6 5 4 3 2 1 0
do
DATES[d]=$(perl -e "use POSIX;print strftime '%Y_%m_%d',localtime time-86400*$d;")
done
for group in `ls *.csv | cut -d_ -f1 | sort -u`
do
# if no CSV file exists, do not attempt processing
set -- $working_dir/*.csv
if [ ! -f "$1" ]; then
break
fi
for d in "${DATES[@]}"
do
file_nm=$(ls ${group}_$d* 2>/dev/null | sort -r | head -1)
if [ "$file_nm" != "" ]
then
# Process file
fi
done
done
You can explicitly iterate over the group/time combinations:
for d in {1..6}
do
DATES[d]=`gdate +"%Y_%m_%d" -d "$d day ago"`
done
for group in `ls *csv | cut -d_ -f1 | sort -u`
do
for d in "${DATES[@]}"
do
echo "$group $d: " `ls ${group}_$d* 2>>/dev/null | sort -r | head -1`
done
done
Which outputs the following for your example data set:
Group 2012_01_06: Group_2012_01_06_041505.csv
Group 2012_01_05:
Group 2012_01_04:
Group 2012_01_03:
Group 2012_01_02:
Group 2012_01_01:
Region 2012_01_06: Region_2012_01_06_070007.csv
Region 2012_01_05:
Region 2012_01_04:
Region 2012_01_03:
Region 2012_01_02:
Region 2012_01_01:
XXXX 2012_01_06:
XXXX 2012_01_05:
XXXX 2012_01_04:
XXXX 2012_01_03:
XXXX 2012_01_02:
XXXX 2012_01_01:
Note that Region_2012_01_06_041508.csv is not shown for Region 2012_01_06, as it is older than Region_2012_01_06_070007.csv.
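Since the HHMMSS timestamp in these names sorts lexicographically, another option is a single pass over the sorted listing with awk, keeping only the newest file per group and day; a sketch, assuming group names contain no underscores (the 7-day filter would still come from the DATES array):
ls *.csv | sort | awk -F_ '
  { key = $1 "_" $2 "_" $3 "_" $4 }      # group plus date portion of the name
  NR > 1 && key != prev { print last }   # key changed: emit newest of previous key
  { prev = key; last = $0 }
  END { if (NR > 0) print last }         # emit newest of the final key
'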