I want to generate a list of users who have malware in their public_html.
I am using avgscan for scanning:
/opt/avg/av/bin/avgscan -a -c --ignerrors --report=avoutput.txt
but it generates a report like this:
/home/someuser/mail/info/cur/1395054106.H396740P84180,S=47470:2,S:/form_ident.rar Trojan horse Inject2.WPP
/home/someuser/public_html/__swift/files/attach_pq2ar348en1z435o5jhqy37de2xfb391 Trojan horse Zbot.BMI
I tried a few things but they didn't work out, because the report also contains a /backup folder which I don't want counted in the list.
I just need the list of users. How can I do this?
#!/bin/bash
in="your_av_report file.txt" # in this case avoutput.txt

# users with infections under /mail/, skipping anything in /backup/
a=$(grep -i "/mail/" "$in" | grep -v "/backup/" | cut -d'/' -f3 | awk '!seen[$0]++')
# users with infections under /public_html/, skipping anything in /backup/
b=$(grep -i "/public_html/" "$in" | grep -v "/backup/" | cut -d'/' -f3 | awk '!seen[$0]++')

echo "$a" >> foo.txt
echo "$b" >> foo.txt
I hope it Helps :)
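If you only need the public_html users (which is what the question actually asks for), the same idea fits in one line. This is only a sketch and assumes every infected path in the report starts with /home/<user>/:
grep '/public_html/' avoutput.txt | grep -v '/backup/' | cut -d'/' -f3 | sort -u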
I have this simple shell script where I am searching for an ID and a port number in a file and saving them in an array. However, when I try to print them I am not getting the expected results. I loop over the array to print the 1st and 2nd elements, then increase the index by two to print the 3rd and 4th, and so on. I also want to print each ID and Port on a separate line, like this:
ID Port
ID Port
My code is:
myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); for ((i=0;i<"${#myarr[@]}";i+=2)) ; do echo $i; printf "%s\n" "${myarr[$i]}" "${myarr[$i+1]}" ; done;
Even if I try to echo the whole array I only see the last element, whereas I can print each individual element without an issue.
$ myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); echo ${myarr[@]}
45210
$ myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); echo ${myarr[0]}
19
$ myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); echo ${myarr[1]}
45210
$ myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); echo ${myarr[2]}
20
$ myarr=($(less radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}' )); echo ${myarr[3]}
45210
From the output you give, I suspect that the problem is due to carriage return characters in the radius-req file. My guess is the file came from Windows (or maybe a web download), which uses carriage return + linefeed as a line terminator. Unix uses just linefeed (aka newline) as a terminator, and unix programs will treat the carriage return as part of the content of the line. Net result: you get things like "19<CR>" and "45210<CR>" as array values, and when you print them they print on top of one another.
If I'm right about the problem, it's pretty easy to fix. Just replace less radius-req (which you shouldn't use anyway, see William Pursell's comment) with tr -d '\r' <radius-req. The tr command does character replacements, -d means just delete instead of replacing, and \r is its notation for the carriage return character. Result: it deletes the carriage returns before they have a chance to mess things up.
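Putting the two together, the fixed version would look roughly like this. It is only a sketch: the file name and the grep patterns are taken unchanged from the question, and the loop prints each ID/Port pair on one line:
myarr=($(tr -d '\r' <radius-req | grep C4-3A-BE-18-C1-2D -B75 | grep '2018-11\|Port' | grep -v User | grep Source -B1 | awk -F "Port:|id=" '{print $2}'))
for ((i=0; i<${#myarr[@]}; i+=2)); do
    printf "%s %s\n" "${myarr[i]}" "${myarr[i+1]}"   # ID and Port on the same line
done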
I have 2 queries which give me these results:
service
number of service
example:
root#:~/elo# cat test | grep name | grep -v expand | cut -c 22- | rev | cut -c 3- | rev
service1
service2
root#:~/elo# cat test | grep customfield | cut -c 31- | rev | cut -c 2- | rev
2.3.4
55.66
I want to join the first value from the first query with the first value from the second query, and so on. In this example the result should be:
service1:2.3.4
service2:55.66
Without a sample file, it is hard to write a working example. But as I see it, both values come from the same text file and the same line, so I would use awk to do it:
$ cat text
service1;some_other_text;2.3.4
service2;just_some_text;55.66
$awk -F ";" '{printf "%s:%s\n", $1, $3}' test
service1:2.3.4
For a JSON file, it would be easier if you can use jq (e.g. apt-get install jq):
$ cat test.json
[
{
"name": "service1",
"customfield_10090": "1.2.3"
},
{
"name": "service2",
"customfield_10090": "23.3.2"
}
]
$ jq '.[] | .name + ":" + .customfield_10090' test.json | sed 's/"//g'
service1:1.2.3
service2:23.3.2
The sed is necessary to eliminate the quotes.
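As a side note (not part of the original answer, but standard jq behaviour), the -r flag makes jq print raw strings, so the sed step can be dropped:
$ jq -r '.[] | .name + ":" + .customfield_10090' test.json
service1:1.2.3
service2:23.3.2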
You can use paste:
paste -d: <(grep name test| grep -v expand | cut -c 22- | rev | cut -c 3- | rev) \
<(grep customfield test | cut -c 31- | rev | cut -c 2- | rev)
But there might be better ways. If the input is json, you can probably use jq for a shorter and more efficient solution.
I have this line of code whose output I would like to hide.
Vrs=$(cat $(echo $line | awk -F"-" '{print "/var/AS-"$2"-"toupper($3)"-"$4}') | grep "YES" | cut -d":" -f5)
I have tried to include &> /dev/null at the end of the line but it doesn't work.
Does anyone know how to do this?
I am not exactly sure what you are trying to achieve, but the cat call looks redundant to me. You could rephrase the statement as:
Vrs=$(echo "$line" | awk -F"-" '{print "/var/AS-"$2"-"toupper($3)"-"$4}' | grep "YES" | cut -d":" -f5)
This does the same thing. If the command is successful, the result is stored in Vrs and nothing is printed to stdout. However, if you expect errors, you could do:
Vrs=$(echo "$line" | awk -F"-" '{print "/var/AS-"$2"-"toupper($3)"-"$4}' | grep "YES" | cut -d":" -f5 2>/dev/null)
This will suppress the errors and give you an empty $Vrs. Note that, placed there, the 2>/dev/null only applies to the final cut command.
Notes:
I have double quoted $line to prevent globbing and word splitting.
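If the goal is to silence errors from every stage of the pipeline, not just the last one, one option (my sketch, not part of the original answer) is to redirect stderr for the whole command substitution:
Vrs=$( { echo "$line" | awk -F"-" '{print "/var/AS-"$2"-"toupper($3)"-"$4}' | grep "YES" | cut -d":" -f5; } 2>/dev/null )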
I am helping debug some code that execs the following script. Is there any reason why it is not writing a file to the server, if that is what it is supposed to do? All the $ data and permissions are OK.
Script:
#!/bin/bash
RANGE=$1
ALLOCATION=`echo $RANGE | cut -f1,2,3 -d'.'`
/sbin/ip rule add from $1 lookup $2
echo $ALLOCATION
rm /path/too/file/location/$ALLOCATION
for i in `seq 3 254`
do
echo $ALLOCATION.$i >> /path/too/file/location/$ALLOCATION
done
ETH=`/sbin/ifconfig | grep eth0 | tail -n1 | cut -f2 -d':' | cut -f1 -d' '`
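For reference, here is one reading of the file-writing part of that script, with quoting tightened and a basic check on the write step. This is only a sketch: /path/too/file/location is the placeholder path from the question, and the ip rule arguments are assumed to be a source range and a routing table:
#!/bin/bash
RANGE=$1                                          # e.g. 10.20.30.0
ALLOCATION=$(echo "$RANGE" | cut -f1,2,3 -d'.')   # first three octets, e.g. 10.20.30
OUT="/path/too/file/location/$ALLOCATION"

/sbin/ip rule add from "$1" lookup "$2"

rm -f "$OUT"                                      # -f: a missing file is not an error
for i in $(seq 3 254); do
    echo "$ALLOCATION.$i"
done >> "$OUT" || echo "could not write $OUT - check the directory exists and is writable" >&2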
I want to filter out datestamps from the message log and delete all occurrences:
(basically this is part of a USB history cleaner script; head -n1 was added only for testing)
delimiter=`echo $HOSTNAME | cut -f1 -d.`
for item in `egrep usb /var/log/messages | awk -F"$delimiter" '{print $1}' | uniq | head -n1`; do
echo ${item}
done
when I run this command:
egrep usb /var/log/messages | awk -F"$delimiter" '{print $1}' | uniq | head -n1
the output is fine:
Mar 31 03:25:03
but when it is given back to the for loop, the data gets split up like this because of the spaces:
Mar
31
03:25:03
the question is: how can I prevent this kind of behaviour?
Instead of:
for item in `whatever`; do
echo ${item}
done
use:
whatever |
while IFS= read -r item; do
echo "${item}"
done
but your whole script could be re-written as just:
awk -F"${HOSTNAME%%.*}" '/usb/ && !seen[$1]++ {print $1}' /var/log/messages