Shell script to parse a log file and convert it to CSV

I need a shell script to parse a log file and look for a certain pattern. If that pattern is found, take key values from that line and put them into a CSV.
Example:
Here is the log file I have:
*webauthRedirect: Mar 24 08:57:50.903: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= a0:88:b4:d3:55:8c bytes parsed = 0 and bytes read = 213
*webauthRedirect: Mar 24 08:57:50.903: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
*ewmwebWebauth1: Mar 04 11:33:46.870: #PEM-6-GUESTIN: pem_api.c:7851 Guest user logged in with user account (mrathi_dev) MAC address 00:1e:65:39:10:8e, IP address 192.168.133.146.
*ewmwebWebauth1: Mar 04 11:33:46.870: #AAA-5-AAA_AUTH_NETWORK_USER: aaa.c:2178 Authentication succeeded for network user 'mrathi_dev'
*ewmwebWebauth1: Mar 04 11:33:46.858: #APF-6-USER_NAME_CREATED: apf_ms.c:6532 Username entry (mrathi_dev) with length (10) created for mobile 00:1e:65:39:10:8e
*mmListen: Mar 24 08:57:49.030: #APF-6-RADIUS_OVERRIDE_DISABLED: apf_ms_radius_override.c:1085 Radius overrides disabled, ignoring source 4
*webauthRedirect: Mar 24 08:57:47.008: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= 5c:a:5b:60:f1:a7 bytes parsed = 0 and bytes read = 440
*webauthRedirect: Mar 24 08:57:47.008: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
*webauthRedirect: Mar 24 08:57:45.453: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= 5c:a:5b:60:f1:a7 bytes parsed = 0 and bytes read = 440
*webauthRedirect: Mar 24 08:57:45.453: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
All I am interested in is the #PEM-6-GUESTIN line. I need to take the user ID, MAC address, and IP address from this line and put them in a CSV. Only log lines with that status are required.
This is my first time working with shell scripts, and any help would be appreciated.

I think it is easiest to use grep to filter and sed to extract groups with a regex:
grep "#PEM-6-GUESTIN" log.txt | sed -r "s/.*user account \((.*)\).* MAC address (.*), IP address (.*)\.$/\1,\2,\3/"
And the output is in CSV format:
mrathi_dev,00:1e:65:39:10:8e,192.168.133.146
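If you prefer a single tool, the same extraction can be done in one awk pass. A sketch under the same assumptions about the line format (log.txt is a placeholder filename):
awk '/#PEM-6-GUESTIN/ {
    user = $0; sub(/.*user account \(/, "", user); sub(/\).*/, "", user)  # text inside (...)
    mac = $0;  sub(/.*MAC address /, "", mac);     sub(/,.*/, "", mac)    # up to the comma
    ip = $0;   sub(/.*IP address /, "", ip);       sub(/\.$/, "", ip)     # drop trailing period
    print user "," mac "," ip
}' log.txt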

Perl's retrieval of file create time incorrect

I am attempting to use Perl to rename files based on the folder they are in and the time they were created. Files GOPR1521.MP4 and GOPR7754.MP4 were created on two different cameras at the same time and date, and I want their names to indicate that. For example, .../GoProTravisL/GOPR1521.MP4 created at 12:32:38 should become 123238L_GOPR1521.MP4, and GOPR7754.MP4 becomes 123239R_GOPR7754.MP4. Right now the only problem is the timestamps. I would think it's a wrong-timezone or hour-offset problem, but the minutes are off too. Is there something in Perl I am missing when getting timestamps? Below is the Perl code, the times it outputs for each file, and what Finder on OS X says the creation times are.
Code:
#!/usr/bin/perl
use Time::Piece;
use File::stat;
use File::Find;
use File::Basename;
use File::Spec;

@files = <$ARGV[0]/*>;
find({ wanted => \&process_file, no_chdir => 1 }, @files);

sub process_file {
    my ($filename, $dirs, $suffix) = fileparse($_, qr/\.[^.]*/);
    if ((-f $_) && ($filename ne "")) {
        #print "\n\nThis is a file: $_";
        #print "\nFile: $filename";
        #print "\nDIR: $dirs";
        my @parsedirs = File::Spec->splitdir($dirs);
        my @strippeddirs;
        foreach my $element (@parsedirs) {
            push @strippeddirs, $element if defined $element and $element ne '';
        }
        $pardir = pop(@strippeddirs);    # innermost directory name, e.g. GoProTravisL
        #print "\nParse DIR: ", $pardir;
        #print "\nFile creation time: ";
        $timestamp = localtime(stat($_)->ctime)->strftime("%H%M%S"); # gives the time stamp
        print $timestamp;
        $newname = $timestamp . substr($pardir, -1) . "_" . $filename . $suffix;
        print "\nRename: $dirs$filename$suffix to $dirs$newname\n";
        #rename($dirs . $filename . $suffix, $dirs . $newname) || die("Error in renaming: " . $!);
    } else {
        print "\n\nThis is not a file: $_\n";
    }
}
Output of time stamps for each file:
/Volumes/Scratch/Raw/2016-03-21/GoProTravisL/
File: GOPR1520
File creation time: 05-55-21
File: GOPR1521
File creation time: 05-56-18
File: GOPR1522
File creation time: 05-57-44
File: GOPR1523
File creation time: 05-58-49
File: GP011520
File creation time: 05-59-53
/Volumes/Scratch/Raw/2016-03-21/GoProTravisR
File: GOPR7754
File creation time: 06-02-48
File: GOPR7755
File creation time: 06-04-19
File: GOPR7756
File creation time: 06-06-27
File: GOPR7757
File creation time: 00-06-16
File: GP017754
File creation time: 00-19-30
File: GP027754
File creation time: 00-22-20
Actual file times using ls:
MacTravis:2016-03-21 travis$ ls -lR /Volumes/Scratch/Raw/2016-03-21
total 0
drwxr-xr-x 8 travis admin 272 Apr 9 21:25 GoProTravisL
drwxr-xr-x 9 travis admin 306 Apr 9 21:25 GoProTravisR
/Volumes/Scratch/Raw/2016-03-21/GoProTravisL:
total 21347376
-rw------- 1 travis admin 4001240088 Mar 21 12:04 GOPR1520.MP4
-rw------- 1 travis admin 1447364149 Mar 21 12:31 GOPR1521.MP4
-rw------- 1 travis admin 2140532053 Mar 21 12:45 GOPR1522.MP4
-rw------- 1 travis admin 1649133454 Mar 21 13:00 GOPR1523.MP4
-rw------- 1 travis admin 1691562945 Mar 21 12:21 GP011520.MP4
/Volumes/Scratch/Raw/2016-03-21/GoProTravisR:
total 31941008
-rw------- 1 travis admin 4001129586 Mar 21 12:04 GOPR7754.MP4
-rw------- 1 travis admin 2166255754 Mar 21 12:31 GOPR7755.MP4
-rw------- 1 travis admin 3202301883 Mar 21 12:45 GOPR7756.MP4
-rw------- 1 travis admin 2466803806 Mar 21 12:08 GOPR7757.MP4
-rw------- 1 travis admin 4001257192 Mar 21 11:27 GP017754.MP4
-rw------- 1 travis admin 516025454 Mar 21 11:29 GP027754.MP4
ctime is the "time of last status change", which is the time the inode was last modified. It is NOT the file's creation time. ls lists the file modification time, so simply change stat($_)->ctime to stat($_)->mtime.
Historically, unix file systems didn't track the time at which a file was created. Some newer file systems do track it, but I am unsure how to access it (nor is it needed here).
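You can also compare the timestamps directly from the shell as a quick check (not part of the fix; GNU coreutils syntax first, BSD/macOS second, where %SB is the true creation/birth time):
$ stat -c 'mtime: %y  ctime: %z' GOPR1521.MP4                 # GNU/Linux
$ stat -f 'mtime: %Sm  ctime: %Sc  birth: %SB' GOPR1521.MP4   # macOS/BSD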

rsyslog - Avoid pushing certain logs to /var/log/messages

I have an EC2 Linux server and am tracking my application server's logs with rsyslog so that I can push them to Loggly.
The problem is that rsyslog also writes these messages to /var/log/messages, which I don't want. Is there any way to avoid this? Can I filter out certain messages in /etc/rsyslog.conf so that they are not pushed to /var/log/messages?
UPDATE:
I tried adding the following lines in rsyslog.conf:
if $programname == 'programName' then {
    *.err /var/log/messages
} else {
    *.info;mail.none;authpriv.none;cron.none /var/log/messages
}
However, upon restarting rsyslog, I see the following error:
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 37:"if $programname == 'programName' then {"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
Dec 11 08:01:46 <hostname> rsyslogd-3000: unknown priority name "" [try http://www.rsyslog.com/e/3000 ]
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 39:"} else {"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
Dec 11 08:01:46 <hostname> rsyslogd-3000: unknown priority name "" [try http://www.rsyslog.com/e/3000 ]
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 41:"}"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
I suppose my version of rsyslog (5.8.10) doesn't support if / else. Is there any other way to do this?
Thanks.
First send the message to the file that you want, then use stop to prevent further actions:
if $programname == 'apache2' then {
    action(type="omfile" file="/var/log/apache2/rewrite.log" name="action-omfile-apache2-rewrite")
    stop
}
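For rsyslog versions too old for RainerScript blocks (like the 5.8.10 in the question), the legacy property-based filter syntax should do the same; a sketch, with the program name and log path as placeholders:
:programname, isequal, "programName"    /var/log/programName.log
& ~
The & repeats the filter for a second action, and ~ discards the message, so it never reaches /var/log/messages.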

How to decrease TCP connect() system call timeout?

In the command below I open the file /dev/tcp/10.10.10.1/80 for both reading and writing and associate it with file descriptor 3:
$ time exec 3<>/dev/tcp/10.10.10.1/80
bash: connect: Operation timed out
bash: /dev/tcp/10.10.10.1/80: Operation timed out
real 1m15.151s
user 0m0.000s
sys 0m0.000s
This automatically tries to perform the TCP three-way handshake. If 10.10.10.1 is not reachable, as in the example above, the connect system call keeps trying for 75 seconds. Is this 75-second timeout determined by bash, or is it a system default? And last but not least, is there a way to decrease this timeout value?
As already mentioned, it's not possible in Bash without modifying the source, but here is a workaround using the timeout command, e.g.:
$ timeout 1 bash -c "</dev/tcp/stackoverflow.com/80" && echo Port open. || echo Port closed.
Port open.
$ timeout 1 bash -c "</dev/tcp/stackoverflow.com/81" && echo Port open. || echo Port closed.
Port closed.
Using this syntax, the timeout command will kill the process after the given time.
See: timeout --help for more options.
It is determined by the TCP implementation, not by bash. It can be decreased on a per-socket basis by application code.
NB: the timeout only takes effect if there is no response at all. If the connection is refused, the error occurs immediately.
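For example, many client tools expose that per-socket setting as a command-line flag, which is often the simplest workaround (illustrative invocations; netcat option names vary between implementations):
$ nc -z -w 3 10.10.10.1 80                # give up connecting after 3 seconds
$ curl --connect-timeout 3 http://10.10.10.1/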
No: there is no way of changing the timeout when using /dev/tcp/.
Yes, you could change the default TCP connection timeout in a general-purpose programming language, but bash is not one!
If you have a look into the source code (see: Bash Homepage), you will find in lib/sh/netopen.c, in the _netopen4 function:
s = socket(AF_INET, (typ == 't') ? SOCK_STREAM : SOCK_DGRAM, 0);
Reading this file carefully shows there is no handling of a connection timeout: without patching the bash sources, there is no way of changing the connection timeout from a bash script.
Simple HTTP client using netcat (nearly pure bash)
Here is a little sample HTTP client written in bash, using netcat:
#!/bin/bash
tmpfile=$(mktemp -p $HOME .netbash-XXXXXX)
exec 7> >(nc -w 3 -q 0 stackoverflow.com 80 >$tmpfile)  # FD 7 feeds nc's stdin; nc writes to the temp file
exec 6<$tmpfile                                         # FD 6 reads what nc writes
rm $tmpfile                                             # safe to unlink: both descriptors stay open
printf >&7 "GET %s HTTP/1.0\r\nHost: stackoverflow.com\r\n\r\n" \
    /questions/24317341/how-to-decrease-tcp-connect-system-call-timeout
timeout=100;
# Poll FD 6 until the status line arrives (read -t .001 doubles as a short sleep)
while ! read -t .001 -u 6 status ; do read -t .001 foo;done
echo STATUS: $status
# Stop unless the server answered HTTP 200
[ "$status" ] && [ -z "${status//HTTP*200 OK*}" ] || exit 1
echo HEADER:
while read -u 6 -a head && [ "${head//$'\r'}" ]; do
    printf "%-20s : %s\n" ${head%:} "${head[*]:1}"
done
echo TITLE:
sed '/<title>/s/<[^>]*>//gp;d' <&6
exec 7>&-
exec 6<&-
This could render:
STATUS: HTTP/1.1 200 OK
HEADER:
Cache-Control : private
Content-Type : text/html; charset=utf-8
X-Frame-Options : SAMEORIGIN
X-Request-Guid : 46d55dc9-f7fe-425f-a560-fc49d885a5e5
Content-Length : 91642
Accept-Ranges : bytes
Date : Wed, 19 Oct 2016 13:24:35 GMT
Via : 1.1 varnish
Age : 0
Connection : close
X-Served-By : cache-fra1243-FRA
X-Cache : MISS
X-Cache-Hits : 0
X-Timer : S1476883475.343528,VS0,VE100
X-DNS-Prefetch-Control : off
Set-Cookie : prov=ff1129e3-7de5-9375-58ee-5f739eb73449; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
TITLE:
bash - How to decrease TCP connect() system call timeout? - Stack Overflow
Some explanations:
First we create a temporary file (under a private directory for security reasons), bind it to two file descriptors, and delete it before using them:
$ tmpfile=$(mktemp -p $HOME .netbash-XXXXXX)
$ exec 7> >(nc -w 3 -q 0 stackoverflow.com 80 >$tmpfile)
$ exec 6<$tmpfile
$ rm $tmpfile
$ ls $tmpfile
ls: cannot access /home/user/.netbash-rKvpZW: No such file or directory
$ ls -l /proc/self/fd
lrwx------ 1 user user 64 Oct 19 15:20 0 -> /dev/pts/1
lrwx------ 1 user user 64 Oct 19 15:20 1 -> /dev/pts/1
lrwx------ 1 user user 64 Oct 19 15:20 2 -> /dev/pts/1
lr-x------ 1 user user 64 Oct 19 15:20 3 -> /proc/30237/fd
lr-x------ 1 user user 64 Oct 19 15:20 6 -> /home/user/.netbash-rKvpZW (deleted)
l-wx------ 1 user user 64 Oct 19 15:20 7 -> pipe:[2097453]
$ echo GET / HTTP/1.0$'\r\n\r' >&7
$ read -u 6 foo
$ echo $foo
HTTP/1.1 500 Domain Not Found
$ exec 7>&-
$ exec 6>&-

Why xinetd can't run a shell service

Guys, I have a problem using xinetd; the error message is 'xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)'.
1. The operating system is CentOS release 6.2.
2. I installed xinetd with the command 'sudo yum install xinetd'.
3. I edited /etc/services and added my port 6033 for my service named 'mysqlchk'.
4. The service 'mysqlchk' in /etc/xinetd.d/mysqlchk is:
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 6033
    wait            = no
    user            = fulu
    server          = /home/fulu/download/mysqlchk_status2.sh
    log_on_failure  += USERID
}
5. The content of the shell file /home/fulu/download/mysqlchk_status2.sh is:
echo 'test'
6. I can run the command /home/fulu/download/mysqlchk_status2.sh directly and get the result 'test'.
7. When I telnet 127.0.0.1 6033, I get the output:
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Then I tail the log file /var/log/messages; it shows:
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: START: mysqlchk pid=20126 from=127.0.0.1
Apr 22 22:01:47 AY1304111122016 xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: EXIT: mysqlchk status=0 pid=20126 duration=0(sec)
I don't know why. Can anybody help me?
I'm sorry, after posting the question I suddenly found the answer. If you want the shell script to be run by another program, you need to add a shebang line such as '#!/bin/sh' as the first line of the file (of course the interpreter can be changed).
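So the minimal fix for the script from the question is:
#!/bin/sh
echo 'test'
With a shebang present, execv() knows which interpreter to load and the "Exec format error" disappears.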

Ruby CSV login counter script

I'm looking for a Ruby script that will use something like last to count the login frequency of each user and output it to a CSV file, so I can make a bar graph of the most frequent logins.
I want it to save the CSV output like this:
user2,19
user6,20
user3,18
Normally last looks like this:
user3 :1001 192.1.20.17 Sun Nov 30 15:01 still logged in
user8 :1000 192.1.20.15 Sun Nov 30 10:00 - 11:52 (01:52)
user2 tty7 :0 Tue Nov 25 19:43 - 21:09 (01:25)
user0 tty7 :0 Tue Nov 25 16:46 - 18:06 (01:19)
Is there something that already does this, or how can I do this?
Maybe this way:
file = `last`
hash = Hash.new(0)                        # default each user's count to 0
file.each_line do |line|
  user = line.split(" ")[0]               # first whitespace-separated field is the user
  hash[user] += 1 if user && !user.empty?
end
output = ""
hash.each_pair { |key, value| output << "#{key},#{value}\n" }
File.open('last.csv', 'w') { |f| f.write(output) }
You should check that splitting on whitespace actually isolates the username in your last output.
Also check the exact output of your last command; you may need to take some of the garbage (reboot lines, the wtmp trailer) out before writing the file ;)
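If Ruby isn't a hard requirement, a shell pipeline produces the same CSV. A sketch that assumes usernames contain no commas and skips the reboot pseudo-user plus the wtmp trailer line:
last | awk 'NF && $1 != "reboot" && $1 != "wtmp" { count[$1]++ }
    END { for (u in count) print u "," count[u] }' > last.csv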
