rsyslog server generating log files named for IP address instead of access_log

I have a syslog-ng server configured to send all Apache log messages to a remote rsyslog server. Here is the pertinent part of my syslog-ng server's config:
source s_http {
file("/var/log/httpd/access_log" flags(no-parse));
};
...
destination loghost { tcp("10.0.0.48" port(514)); };
...
log { source(s_http); destination(loghost); };
I was hoping to find the file /apps/log/my-web-server/access_log on the remote rsyslog server (10.0.0.48), but instead I find several files in /apps/log/my-web-server/ named for the IP addresses of the clients that hit my-web-server, each with a .log extension.
[root@10.0.0.48]# pwd
/apps/log/my-web-server
[root@10.0.0.48]# ls -l
total 140
-rw-------. 1 root root 4862 Aug 14 16:39 10.0.0.97.log
-rw-------. 1 root root 193 Aug 14 15:45 10.0.0.201.log
Why aren't the log messages going into one file named access_log?
Update:
On the rsyslog server at 10.0.0.48 I see these lines in /etc/rsyslog.conf:
$template RemoteStore, "/apps/log/%HOSTNAME%/%PROGRAMNAME%.log"
$template RemoteStoreFormat, "%msg%\n"
:source, !isequal, "localhost" -?RemoteStore;RemoteStoreFormat
:source, isequal, "last" STOP
What does that mean?

It turns out the RemoteStore template names each file after the message's program name, and because the access_log lines were forwarded raw (flags(no-parse)), the first field of each Apache line (the client's IP address) ended up as the program name on the rsyslog side, hence one file per client IP. I needed to change ...
source s_http {
file("/var/log/httpd/access_log" flags(no-parse));
};
... to this ...
source s_http {
file("/var/log/httpd/access_log" program-override("apache_access_log"));
};
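With that change the messages should land in a single file named after the overridden program name (apache_access_log.log, given the RemoteStore template above). If a file literally called access_log is wanted instead, a fixed-name template plus a programname filter on the rsyslog side should do it. This is an untested sketch: the template names and the drop-in path are my own, and the existing RemoteStore rule would still write its own copy unless it is removed:
sudo tee /etc/rsyslog.d/20-apache-access.conf <<'EOF'
# write everything tagged apache_access_log to a fixed file name
$template ApacheAccess, "/apps/log/%HOSTNAME%/access_log"
$template MsgOnly, "%msg%\n"
:programname, isequal, "apache_access_log" -?ApacheAccess;MsgOnly
EOF
sudo service rsyslog restart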

Related

Elasticsearch 1.7.4 log rotation

I'm trying to use logging.yml (the Elasticsearch logging config) together with a logrotate configuration for Elasticsearch log rotation.
Information:
1. Elasticsearch version: 1.7.4
2. I don't want to keep any rotated files ...
Configuration:
logging.yml configuration:
file:
  type: org.apache.log4j.rolling.RollingFileAppender
  file: ${path.logs}/${cluster.name}.log
  rollingPolicy: org.apache.log4j.rolling.TimeBasedRollingPolicy
  rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
Logrotate configuration:
/var/log/elasticsearch/*.log {
daily
rotate 0
copytruncate
compress
delaycompress
missingok
notifempty
maxage 0
create 644 elasticsearch elasticsearch
}
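As an aside, a stanza like this can be sanity-checked without touching any files by running logrotate in debug mode, which only prints the decisions it would make:
# debug/dry-run mode: reports what would be rotated, changes nothing
logrotate -d /etc/logrotate.d/elasticsearch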
More details:
Running ls on /var/log/elasticsearch:
total 20K
-rw-r--r-- 1 elasticsearch elasticsearch 18763 Jul 4 08:46 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
Running logrotate manually:
logrotate -fv /etc/logrotate.d/elasticsearch
logrotate output:
reading config file /etc/logrotate.d/elasticsearch
reading config info for /var/log/elasticsearch/*.log
Handling 1 logs
rotating pattern: /var/log/elasticsearch/*.log forced from command line (no old logs will be kept)
empty log files are not rotated, old logs are removed
considering log /var/log/elasticsearch/dba01es.d1.log
log needs rotating
considering log /var/log/elasticsearch/dba01es.d1_index_indexing_slowlog.log
log does not need rotating
considering log /var/log/elasticsearch/dba01es.d1_index_search_slowlog.log
log does not need rotating
rotating log /var/log/elasticsearch/dba01es.d1.log, log->rotateCount is 0
dateext suffix '-20160704'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/elasticsearch/dba01es.d1.log.1 does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.1.gz to /var/log/elasticsearch/dba01es.d1.log.2.gz (rotatecount 1, logstart 1, i 1),
old log /var/log/elasticsearch/dba01es.d1.log.1.gz does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.0.gz to /var/log/elasticsearch/dba01es.d1.log.1.gz (rotatecount 1, logstart 1, i 0),
old log /var/log/elasticsearch/dba01es.d1.log.0.gz does not exist
log /var/log/elasticsearch/dba01es.d1.log.2.gz doesn't exist -- won't try to dispose of it
copying /var/log/elasticsearch/dba01es.d1.log to /var/log/elasticsearch/dba01es.d1.log.1
truncating /var/log/elasticsearch/dba01es.d1.log
Running ll after running logrotate manually:
total 32K
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jul 4 08:48 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 28937 Jul 4 08:48 dba01es.d1.log.1
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
My questions are:
Why is the dba01es.d1.log.1 file not compressed?
Why is rotate 0 not working here? logrotate keeps saving the rotated file ...
Thanks a lot!
Amit

rsyslog - Avoid pushing certain logs to /var/log/messages

I have an EC2 Linux server, and I am tracking my application server's logs with rsyslog so that I can push them to Loggly.
The problem is that rsyslog is also logging these messages in /var/log/messages, which I don't want. Is there any way to avoid this? Can I filter out certain messages in /etc/rsyslog.conf so that they are not pushed to /var/log/messages?
Update:
I tried adding the following lines in rsyslog.conf:
if $programname == 'programName' then {
*.err /var/log/messages
} else {
*.info;mail.none;authpriv.none;cron.none /var/log/messages
}
However, upon restarting rsyslog, I see the following error:
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 37:"if $programname == 'programName' then {"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
Dec 11 08:01:46 <hostname> rsyslogd-3000: unknown priority name "" [try http://www.rsyslog.com/e/3000 ]
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 39:"} else {"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
Dec 11 08:01:46 <hostname> rsyslogd-3000: unknown priority name "" [try http://www.rsyslog.com/e/3000 ]
Dec 11 08:01:46 <hostname> rsyslogd: the last error occured in /etc/rsyslog.conf, line 41:"}"
Dec 11 08:01:46 <hostname> rsyslogd: warning: selector line without actions will be discarded
I suppose my version of rsyslog (5.8.10) doesn't support if / else. Is there any other way to do this?
Thanks.
First send the message to the file that you want, then use stop to prevent further actions:
if $programname == 'apache2' then {
action(type="omfile" file="/var/log/apache2/rewrite.log" name="action-omfile-apache2-rewrite")
stop
}
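The action()/stop block above is RainerScript, which the rsyslog 5.8.10 from the question does not understand. On 5.x the same idea (write to its own file, then discard) can be expressed with legacy property-based filters. A sketch, where 'programName', the target file and the drop-in path are placeholders, and which assumes /etc/rsyslog.d/ is included before the catch-all *.info;mail.none;authpriv.none;cron.none /var/log/messages rule (the usual layout on RHEL-style configs):
sudo tee /etc/rsyslog.d/10-programName.conf <<'EOF'
# send this program's messages to their own file ...
:programname, isequal, "programName" /var/log/programName.log
# ... then discard them so they never reach /var/log/messages
& ~
EOF
sudo service rsyslog restart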

Shell Script to parse log and Convert to csv

I need a shell script to parse a log file and look for a certain pattern. If that pattern is found, take the key values from that line and put them into a CSV.
Example:
Here is the log file I have:
*webauthRedirect: Mar 24 08:57:50.903: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= a0:88:b4:d3:55:8c bytes parsed = 0 and bytes read = 213
*webauthRedirect: Mar 24 08:57:50.903: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
***ewmwebWebauth1: Mar 04 11:33:46.870: #PEM-6-GUESTIN: pem_api.c:7851 Guest user logged in with user account (mrathi_dev) MAC address 00:1e:65:39:10:8e, IP address 192.168.133.146.**
*ewmwebWebauth1: Mar 04 11:33:46.870: #AAA-5-AAA_AUTH_NETWORK_USER: aaa.c:2178 Authentication succeeded for network user 'mrathi_dev'
*ewmwebWebauth1: Mar 04 11:33:46.858: #APF-6-USER_NAME_CREATED: apf_ms.c:6532 Username entry (mrathi_dev) with length (10) created for mobile 00:1e:65:39:10:8e
*mmListen: Mar 24 08:57:49.030: #APF-6-RADIUS_OVERRIDE_DISABLED: apf_ms_radius_override.c:1085 Radius overrides disabled, ignoring source 4
*webauthRedirect: Mar 24 08:57:47.008: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= 5c:a:5b:60:f1:a7 bytes parsed = 0 and bytes read = 440
*webauthRedirect: Mar 24 08:57:47.008: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
*webauthRedirect: Mar 24 08:57:45.453: #EMWEB-6-PARSE_ERROR: webauth_redirect.c:1034 parser exited. client mac= 5c:a:5b:60:f1:a7 bytes parsed = 0 and bytes read = 440
*webauthRedirect: Mar 24 08:57:45.453: #EMWEB-6-HTTP_REQ_BEGIN_ERR: http_parser.c:579 http request should begin with a character
All I am interested in is the #PEM-6-GUESTIN line. I need to take the user ID, MAC and IP address from that line and put them in a CSV. Only log lines with that status are required.
This is my first time working with shell scripts and all your help would be appreciated.
I think it is easier to use grep to filter and sed to extract the groups with a regex:
grep "#PEM-6-GUESTIN" log.txt | sed -r "s/.*user account \((.*)\).* MAC address (.*), IP address (.*)\.\*\*.*/\1,\2,\3/"
And the output is in CSV format:
mrathi_dev,00:1e:65:39:10:8e,192.168.133.146
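For what it's worth, the same extraction can be done in a single GNU awk pass. A sketch that assumes gawk (for the three-argument match()) and the field layout of the sample line above:
gawk '/#PEM-6-GUESTIN/ {
    match($0, /user account \(([^)]+)\)/, u)   # user id between the parentheses
    match($0, /MAC address ([^,]+),/, m)       # MAC address up to the comma
    match($0, /IP address ([0-9.]+[0-9])/, i)  # IP address, without the trailing dot
    print u[1] "," m[1] "," i[1]
}' log.txt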

why xinetd can't run shell service

Guys, I have a problem using xinetd. The error message is 'xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)'.
The operating system is CentOS release 6.2.
I installed xinetd with 'sudo yum install xinetd'.
I edited /etc/services and added port 6033 for my service named 'mysqlchk'.
The service definition in /etc/xinetd.d/mysqlchk is:
service mysqlchk
{
disable = no
flags = REUSE
socket_type = stream
port = 6033
wait = no
user = fulu
server = /home/fulu/download/mysqlchk_status2.sh
log_on_failure += USERID
}
The content of the shell file /home/fulu/download/mysqlchk_status2.sh is:
echo 'test'
I can run /home/fulu/download/mysqlchk_status2.sh directly and get the result 'test'.
When I telnet 127.0.0.1 6033, I get this output:
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Then I tail the log file /var/log/messages and it shows:
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: START: mysqlchk pid=20126 from=127.0.0.1
Apr 22 22:01:47 AY1304111122016 xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: EXIT: mysqlchk status=0 pid=20126 duration=0(sec)
I don't know why. Can anybody help me?
I'm sorry, after asking this I suddenly found the answer myself: if you want the shell script to be run by another program, you need to add an interpreter line such as '#!/bin/sh' as the first line of the script (of course the interpreter can be changed).
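For completeness, a fixed version of the script might look like this (a sketch; /bin/sh is just one possible interpreter, and the chmod only matters if the execute bit is not already set):
cat > /home/fulu/download/mysqlchk_status2.sh <<'EOF'
#!/bin/sh
echo 'test'
EOF
# make sure xinetd's user can execute it
chmod +x /home/fulu/download/mysqlchk_status2.sh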

ruby webrick and cgi

I can't get WEBrick to work with the HTTPServlet::CGIHandler servlet; I get an EACCES error:
[2012-12-06 01:38:02] ERROR CGIHandler: /tmp/cgi-bin:
/Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpservlet/cgi_runner.rb:46:in `exec': Permission denied - /tmp/cgi-bin (Errno::EACCES)
from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpservlet/cgi_runner.rb:46:in `<main>'
[2012-12-06 01:38:02] ERROR CGIHandler: /tmp/cgi-bin exit with 1
[2012-12-06 01:38:02] ERROR Premature end of script headers: /tmp/cgi-bin
localhost - - [06/Dec/2012:01:38:02 MST] "GET /cgi/my_prog.cgi HTTP/1.1" 500 326
- -> /cgi/my_prog.cgi
Here are the permissions I set:
~/ruby_programs$ cd /
/$ ls -al tmp
lrwxr-xr-x# 1 root wheel 11 Jul 3 2011 tmp -> private/tmp
/$ cd tmp
/tmp$ ls -al
total 0
drwxrwxrwt 8 root wheel 272 Dec 6 01:08 .
drwxr-xr-x# 6 root wheel 204 Mar 27 2010 ..
drwxr-xr-x 3 7stud wheel 102 Dec 6 01:25 cgi-bin
/tmp$ cd cgi-bin/
/tmp/cgi-bin$ ls -al my_prog.cgi
-rwxr-xr-x 1 7stud wheel 123 Dec 6 01:09 my_prog.cgi
My server program (1.rb):
#!/usr/bin/env ruby
require 'webrick'
include WEBrick
port = 12_000
dir = Dir::pwd
server = HTTPServer.new(
:Port => port,
:DocumentRoot => dir + "/html"
)
server.mount("/cgi", HTTPServlet::CGIHandler, "/tmp/cgi-bin")
puts "Listening on port: #{port}"
Signal.trap('SIGINT') { server.shutdown }
server.start
Running my server program:
~/ruby_programs$ ruby 1.rb
[2012-12-06 01:37:58] INFO WEBrick 1.3.1
[2012-12-06 01:37:58] INFO ruby 1.9.3 (2012-04-20) [x86_64-darwin10.8.0]
Listening on port: 12000
[2012-12-06 01:37:58] INFO WEBrick::HTTPServer#start: pid=4260 port=12000
I entered this address in my browser:
http://localhost:12000/cgi/my_prog.cgi
This was displayed in my browser:
Internal Server Error
Premature end of script headers: /tmp/cgi-bin WEBrick/1.3.1
(Ruby/1.9.3/2012-04-20) at localhost:12000
Here's my CGI script (/tmp/cgi-bin/my_prog.cgi):
#!/usr/bin/env ruby
require 'cgi'
cgi = CGI.new
puts cgi.header
puts "<html><body>Hello Webrick</body></html>"
The only way I can get WEBrick to execute .cgi files in a directory other than the document root is to use the HTTPServlet::FileHandler servlet:
port = 12_500
...
cgi_dir = File.expand_path("~/ruby_programs/cgi-bin")
server.mount("/cgi", HTTPServlet::FileHandler, cgi_dir)
Then the URL used to execute a .cgi file located in cgi_dir is:
http://localhost:12500/cgi/my_prog.cgi
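The same request can also be made from a shell to inspect the response, e.g.:
# -i prints the response headers along with the body
curl -i http://localhost:12500/cgi/my_prog.cgi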
Apparently, when you write:
server = HTTPServer.new(
:Port => port,
:DocumentRoot => "./html" #Regular files served/.cgi files executed out of this dir
)
WEBrick automatically "mounts" an HTTPServlet::FileHandler to handle requests to the :DocumentRoot directory, e.g.
http://localhost:12500/my_html.htm
which will serve files out of the ./html directory (i.e. a directory called html located below the directory from which your program is running). The HTTPServlet::FileHandler will also execute files in that directory if they have a .cgi extension.
If you explicitly use mount() to add an HTTPServlet::FileHandler to another directory, e.g.
cgi_dir = File.expand_path("~/ruby_programs/cgi-bin")
server.mount("/cgi", HTTPServlet::FileHandler, cgi_dir)
then WEBrick will also serve files from that directory and execute files in that directory that have a .cgi extension.
I haven't found a way to configure WEBrick to only serve files out of the :DocumentRoot directory and only execute .cgi files in another directory.
See "Gnome's Guide to WEBrick" here:
http://microjet.ath.cx/webrickguide/html/
In my case I had similar problems because of incorrect file permissions and, as it turned out, incorrect headers. The permissions of the CGI script should be like this:
~/ruby_projects/cgi_webrick/cgi
ls -l
-rwxr-xr-x 1 me me 112 Sep 20 20:29 test.cgi
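Those bits can be set with chmod, e.g.:
chmod 755 test.cgi   # rwxr-xr-x: owner read/write/execute, everyone else read/execute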
My server code looks very similar (placed in ~/ruby_projects/cgi_webrick/), but with a different handler.
#!/usr/bin/env ruby
require 'webrick'
server = WEBrick::HTTPServer.new :Port => 1234
server.mount "/", WEBrick::HTTPServlet::FileHandler , './'
trap('INT') { server.stop }
server.start
If you run the server script (ruby my_server_script.cgi), it will serve scripts from the root directory or any subdirectory. In my case I can access http://localhost:1234/cgi/test.cgi (the script placed in the cgi subfolder) and http://localhost:1234/test.cgi (placed in the root directory from which the server is started).
My test script:
#!/usr/bin/ruby
require 'cgi'
cgi = CGI.new
puts cgi.header
puts "<html><body>This is a test</body></html>"
