How to read /var/log/wtmp logs in Elasticsearch

I am trying to read the access logs from /var/log/wtmp into Elasticsearch.
I can read the file when logged into the box by using last -F /var/log/wtmp.
I have Logstash running and sending logs to Elasticsearch; here is my Logstash config file:
input {
  file {
    path => "/var/log/wtmp"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    port => "9200"
  }
}
What shows up in Elasticsearch is mangled binary, for example:
G

wtmp is a binary file: open it with less and you will only see raw bytes, so Logstash cannot make sense of the data directly.
A Logstash config like the following should work fine:
input {
  pipe {
    command => "/usr/bin/last -f /var/log/wtmp"
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    port => "9200"
  }
}
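One caveat worth noting about the pipe input (an observation about the plugin, not from the thread): it restarts its command whenever the command exits, so last would be re-run and the whole file re-ingested over and over. The exec input with an interval is a sketch of a gentler alternative:

```
input {
  exec {
    # re-run last every 5 minutes instead of restarting it continuously
    command => "last -f /var/log/wtmp"
    interval => 300
  }
}
```

Note that exec emits the entire command output as one event unless you also set a line-splitting codec (codec => line).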

Vineeth's answer is right, but the following cleaner config works as well:
input { pipe { command => "last" } }
last -f /var/log/wtmp and plain last are exactly the same, because /var/log/wtmp is last's default file.
utmp, wtmp and btmp are Unix files that keep track of user logins and logouts. They cannot be read directly because they are binary rather than regular text files. However, the last command displays the information from /var/log/wtmp in plain text.
$ last --help
Usage:
last [options] [<username>...] [<tty>...]
I can read the file when logged into the box by using last -F /var/log/wtmp
I doubt that. What the -F flag does:
-F, --fulltimes print full login and logout times and dates
So, last -F /var/log/wtmp will interpret /var/log/wtmp as a username and won't print any login information.
What the -f flag does:
-f, --file <file> use a specific file instead of /var/log/wtmp
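Since wtmp is a fixed-size binary record format, a short illustration of what last is actually decoding may help. This is a sketch that assumes the glibc layout on x86_64 Linux (384-byte, little-endian records); other platforms lay the struct out differently, and the sample record below is synthetic:

```python
import struct

# glibc <utmp.h> layout on x86_64 Linux: 384 bytes per record (assumed).
UTMP_STRUCT = struct.Struct(
    "<h2x"   # ut_type + alignment padding
    "i"      # ut_pid
    "32s"    # ut_line   (device / tty name)
    "4s"     # ut_id
    "32s"    # ut_user
    "256s"   # ut_host
    "2h"     # ut_exit   (termination and exit status)
    "i"      # ut_session
    "2i"     # ut_tv     (seconds, microseconds)
    "4i"     # ut_addr_v6
    "20s"    # reserved
)

USER_PROCESS = 7  # ut_type value for a normal interactive login


def parse_wtmp(data):
    """Yield (type, user, line, host, epoch_seconds) per wtmp record."""
    def c_str(b):
        # C strings are NUL-terminated inside fixed-width fields
        return b.split(b"\0", 1)[0].decode("ascii", "replace")

    for off in range(0, len(data) - UTMP_STRUCT.size + 1, UTMP_STRUCT.size):
        (ut_type, _pid, line, _id, user, host, _term, _exit,
         _session, sec, _usec, *_rest) = UTMP_STRUCT.unpack_from(data, off)
        yield ut_type, c_str(user), c_str(line), c_str(host), sec


# Synthetic record standing in for a real /var/log/wtmp read:
sample = UTMP_STRUCT.pack(USER_PROCESS, 1234, b"pts/0", b"ts/0", b"alice",
                          b"10.0.0.1", 0, 0, 0, 1700000000, 0, 0, 0, 0, 0, b"")
entries = list(parse_wtmp(sample))
print(entries[0])
```

In practice you would read /var/log/wtmp in 384-byte chunks and filter on USER_PROCESS; the point is only that the file is a packed C struct, which is why a line-oriented reader like the file input produces garbage.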

Related

Trying to set logstash conf file in docker-compose.yml on Mac OS

Here is what I have specified in my yml for the logstash. I've tried multiple variations of quotes, no quotes, etc:
volumes:
  - "./logstash:/etc/logstash/conf:ro"
command:
  - "logstash -f /etc/logstash/conf/simplels.conf"
And simplels.conf contains this:
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {}
}
Overall file structure is this, I'm running docker-compose up from the docker folder and getting Exit 1 on the Logstash container due to my 'command' parameter:
/docker:
  docker-compose.yml
  /logstash
    simplels.conf
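No accepted fix is recorded in this excerpt, but one plausible cause (an assumption, not from the thread) is the YAML list form: with a single list element, Compose treats the entire quoted string as one executable literally named logstash -f /etc/logstash/conf/simplels.conf, which does not exist, hence the exit. Either a plain string, which Compose word-splits itself, or a fully split exec array avoids that:

```yaml
# string form: Compose splits this into argv itself
command: logstash -f /etc/logstash/conf/simplels.conf

# or exec form: one list element per argument
command: ["logstash", "-f", "/etc/logstash/conf/simplels.conf"]
```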

Not able to see newly added log in docker ELK

I'm using sebp/elk's dockerised ELK. I've managed to get it running on my local machine and I'm trying to input dummy log data by SSH'ing into the docker container and running:
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
-e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
After typing in some random text, I cannot see it indexed by Elasticsearch when I visit http://localhost:9200/_search?pretty&size=1000.

Logstash 6.5.4 executing bash script in filter ruby plugin

I am attempting to execute a bash script from Logstash using the ruby filter plugin.
My bash code:
#!/bin/bash
name=$1
log_file="/tmp/test.json"
if [[ -n "$name" ]]; then
  echo "$1=$( date +%s )" >> ${log_file}
else
  echo "argument error"
fi
My Logstash config:
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    codec => "json"
    type => "json"
    sincedb_write_interval => "1"
    discover_interval => "1"
  }
}
filter {
  if [host] {
    ruby {
      code => '
        require "open3"
        host = event.get("host")
        cmd = "/bin/bash /etc/logstash/conf.d/bash_scripts/execute.sh #{host}"
        stdin, stdout, stderr = Open3.popen3(cmd)
      '
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "bash_execute"
  }
}
There are no errors; the events from the file go through, but the bash script never gets triggered. I do not care about any event confirmation; I just want to execute an arbitrary bash script that accepts an argument taken from the Logstash event.
Can anyone assist?
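A hedged sketch of one possible fix, assuming the script path from the question: Open3.popen3 actually returns four values (stdin, stdout, stderr, wait_thr), and the code above neither waits for the child nor escapes #{host} before it reaches a shell. Open3.capture3 with an argument vector blocks until the script finishes and passes the host value as a literal argv element; the script_exit field is purely illustrative:

```
filter {
  if [host] {
    ruby {
      code => '
        require "open3"
        host = event.get("host").to_s
        # argument-vector form: no shell is involved, so host needs no escaping
        out, err, status = Open3.capture3(
          "/bin/bash", "/etc/logstash/conf.d/bash_scripts/execute.sh", host)
        event.set("script_exit", status.exitstatus)
      '
    }
  }
}
```

Capturing the exit status also makes failures visible in the indexed event instead of disappearing silently.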

Logstash not matching the pattern

I am learning Logstash and have a very simple config file:
input {
  file {
    path => "D:\b.log"
    start_position => beginning
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
filter {
  grok {
    match => { "message" => "%{LOGLEVEL:loglevel}" }
  }
}
output {
  stdout { codec => rubydebug }
}
The input file is just this:
INFO
I am running Logstash on Windows and the command is
logstash -f logstash.conf
I expect output on the console to confirm that it is working, but Logstash produces none, just its startup messages:
D:\Installables\logstash-2.0.0\logstash-2.0.0\bin>logstash -f logstash.conf
io/console not supported; tty will not be manipulated
Default settings used: Filter workers: 2
Logstash startup completed
I have deleted the sincedb file and tried again. Is there something I am missing?
I think this answers your question:
How to force Logstash to reparse a file?
It looks like you are missing the quotes around "beginning", and the other post recommends redirecting sincedb to /dev/null. I don't know if there is a Windows equivalent for that. I used that as well, and it worked fine.
As an alternative, what I do now is to configure stdin() as input so that I don't have to worry about anything else.
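For completeness, a sketch of the input block with both suggestions applied; the NUL value for sincedb_path as a Windows stand-in for /dev/null and the forward slashes in the path are assumptions from general Logstash usage on Windows, not from this thread:

```
input {
  file {
    path => "D:/b.log"              # the file input wants forward slashes, even on Windows
    start_position => "beginning"   # quoted string
    sincedb_path => "NUL"           # read position is never remembered, so the file is reparsed each run
  }
}
```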

input as file path in logstash config

When I run a command like this (on a Windows system):
logstash agent -f logstash-simple.conf
with stdin{} as the input, the config gives the expected output, but when the input is a path to a file (file { path => ... }) it gives no output at all.
Here is my config(logstash-simple.conf) file:
input {
  file {
    type => "syslog"
    path => ["C:/Users/Administrator/Downloads/syslog.txt"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
If you have an existing file that you are looking to load, you'll need to add
start_position => "beginning"
to your file input.
I had the same problem.
You should have an empty line at the end of the file!
That worked for me.
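Combining the answers above, a sketch of an input block that usually works for loading an existing file on Windows; the sincedb_path line is an extra assumption to make Logstash forget its read position between runs, so drop it if you only want the file read once:

```
input {
  file {
    type => "syslog"
    path => ["C:/Users/Administrator/Downloads/syslog.txt"]
    start_position => "beginning"   # read the file from the top on first discovery
    sincedb_path => "NUL"           # optional: Windows stand-in for /dev/null
  }
}
```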
