Logstash does not send anything to ES - elasticsearch

I have IIS failed-request log XML files and I am trying to read, parse, and send them to ES, but my Logstash does not send anything.
I could not find any solution. Thanks for your help.
input {
  file {
    path => "C:\Users\name\Desktop\Log2\WebApplication2\*.xml"
  }
}
filter {
  xml {
    source => "message"
    store_xml => false
    target => "target"
    xpath => ["/failedRequest/_url/#text", "clasification"]
    remove_field => "message"
  }
}
filter {
  mutate {
    add_field => { "class" => "%{target}" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logfromstash"
  }
}
I edited the XML tree (shown below); IIS is using freb.xsl. Could that cause an error?
object {1}
  failedRequest {16}
    Event [161]
    _xmlns:freb : http://schemas.microsoft.com/win/2006/06/iis/freb
    _url : https://localhost:44324/api/employee/14
    _siteId : 2
    _appPoolId : Clr4IntegratedAppPool
    _processId : 8240
    _verb : GET
    _remoteUserName :
    _userName :
    _tokenUserName : name
    _authenticationType : anonymous
    _activityId : {800000B5-0002-F900-B63F-84710C7967BB}
    _failureReason : STATUS_CODE
    _statusCode : 200
    _triggerStatusCode : 200
    _timeTaken : 47
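For what it's worth, here is a minimal sketch of a file input that tends to behave better on Windows. The forward-slash path, start_position, sincedb_path and the multiline pattern are assumptions for testing, not settings taken from the question; since the file input emits one event per line, the xml filter only ever sees a single line unless the lines are folded back into one document first.

input {
  file {
    # Forward slashes avoid backslash-escaping issues in Windows paths (same directory as above, just rewritten)
    path => "C:/Users/name/Desktop/Log2/WebApplication2/*.xml"
    # Read existing files from the top instead of only tailing new lines
    start_position => "beginning"
    # NUL disables the sincedb on Windows so files are re-read on every test run
    sincedb_path => "NUL"
    # Fold every line that does not start a new XML document into the previous event
    # (the pattern assumes each freb file begins with an <?xml declaration)
    codec => multiline {
      pattern => "^<\?xml"
      negate => true
      what => "previous"
      # Flush the last document instead of waiting for the next one to arrive
      auto_flush_interval => 2
    }
  }
}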

Related

Elasticsearch, upsert a document with script when the index does not exist

I'm receiving payloads in Logstash that I push into Elasticsearch in a monthly rolling index, with a script that lets me override the fields depending on the order of the statuses of those payloads.
Example:
{
  "id" : "abc",
  "status" : "OPEN",
  "field1" : "foo",
  "opening_ts" : 1234567
}
{
  "id" : "abc",
  "status" : "CLOSED",
  "field1" : "bar",
  "closing_ts" : 7654321
}
I want that, even if I receive the OPEN payload after the CLOSED one for the id "abc", my Elasticsearch document ends up as:
{
  "_id" : "abc",
  "status" : "CLOSED",
  "field1" : "bar",
  "closing_ts" : 7654321,
  "opening_ts" : 1234567
}
In order to guarantee that, I have added a script to my Elasticsearch output plugin in Logstash:
script => "
  if (ctx._source['status'] == 'CLOSED') {
    for (key in params.event.keySet()) {
      if (ctx._source[key] == null) {
        ctx._source[key] = params.event[key]
      }
    }
  } else {
    for (key in params.event.keySet()) {
      ctx._source[key] = params.event[key]
    }
  }
"
But adding this script also added an extra step around the implicit "PUT" on the index: if the target index does not exist, the script fails and the whole document is never created (nor the index).
Do you know how I could handle an error in this script?
You need to resort to scripted upsert:
output {
  elasticsearch {
    index => "your-index"
    document_id => "%{id}"
    action => "update"
    scripted_upsert => true
    script => "... your script..."
  }
}
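Putting it together, a rough sketch of how the complete output block could look; the index name is a placeholder, and it is an assumption (based on the plugin's default script_var_name) that the event stays available to the script as params.event. With scripted_upsert enabled, the script also runs when the document does not exist yet; ctx._source then starts out empty, so the else branch of the question's script simply copies every field from the event.

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "your-index"
    document_id => "%{id}"
    action => "update"
    # Run the script even when the document does not exist yet,
    # so the first payload (OPEN or CLOSED) creates it
    scripted_upsert => true
    script => "
      if (ctx._source['status'] == 'CLOSED') {
        // Document already closed: only fill in fields that are still missing
        for (key in params.event.keySet()) {
          if (ctx._source[key] == null) {
            ctx._source[key] = params.event[key]
          }
        }
      } else {
        // New or still-open document: take every field from the incoming event
        for (key in params.event.keySet()) {
          ctx._source[key] = params.event[key]
        }
      }
    "
  }
}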

How to get ElasticSearch output?

I want to add my log document to Elasticsearch and then check the document in Elasticsearch.
Following is the content of the log file:
Jan 1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
Feb 2 06:25:43 mailserver15 postfix/cleanup[21403]: BEF25A72999: message-id=<20130101142543.5828399CCAF@mailserver15.example.com>
Mar 3 06:25:43 mailserver16 postfix/cleanup[21403]: BEF25A72998: message-id=<20130101142543.5828399CCAF@mailserver16.example.com>
I am able to run my Logstash instance with the following Logstash configuration file:
input {
  file {
    path => "/Myserver/mnt/appln/somefolder/somefolder2/testData/fileValidator-access.LOG"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    patterns_dir => ["/Myserver/mnt/appln/somefolder/somefolder2/logstash/pattern"]
    match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    document_id => "test"
    index => "testindex"
    action => "update"
  }
  stdout { codec => rubydebug }
}
I have defined my own grok pattern as:
POSTFIX_QUEUEID [0-9A-F]{10,11}
When I run the Logstash instance, the data is successfully sent to Elasticsearch, which gives the following output:
Now I have the index stored in Elasticsearch under testindex, but when I use curl -X GET "localhost:9200/testindex" I get the following output:
{
  "depositorypayin" : {
    "aliases" : { },
    "mappings" : { },
    "settings" : {
      "index" : {
        "creation_date" : "1547795277865",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "5TKW2BfDS66cuoHPe8k5lg",
        "version" : {
          "created" : "6050499"
        },
        "provided_name" : "depositorypayin"
      }
    }
  }
}
This is not what is stored inside the index. I want to query the documents inside the index. Please help. (PS: please forgive me for the typos)
The API you used above only returns information about the index itself (docs here). You need to use the Query DSL to search the documents. The following Match All Query will return all the documents in the index testindex:
curl -X GET "localhost:9200/testindex/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match_all": {}
  }
}
'
Actually, I have edited my config file, which looks like this now:
input {
  . . .
}
filter {
  . . .
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "testindex"
  }
}
And now I am able to fetch the data from Elasticsearch using
curl 'localhost:9200/testindex/_search'
I don't know how it works, but it does now.
Can anyone explain why?
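A likely explanation (an assumption, since the error logs are not shown): the original output used action => "update" together with the fixed document_id => "test", so every event tried to update one and the same document, and an update fails when that document does not exist yet. Without those options, Logstash simply indexes each event with an auto-generated _id. If a per-event id is actually wanted, here is a sketch that keeps the update action but creates the document on first sight; %{some_unique_field} is a hypothetical placeholder for a real field in the event:

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "testindex"
    # Hypothetical: derive the document id from a field of the event
    document_id => "%{some_unique_field}"
    action => "update"
    # Create the document when it does not exist instead of failing the update
    doc_as_upsert => true
  }
}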

Logstash 5.6.0 and Elasticsearch 6.2.1

I have the below configuration in logstash.conf and started Logstash with the following command:
logstash --verbose -f D:\ELK\logstash-5.6.0\logstash-5.6.0\logstash.conf
Elasticsearch is running on port 9200, but Logstash is not pipelining the parsed log file contents into Elasticsearch. Did I miss any configuration, or what am I doing wrong here?
input {
  file {
    path => "D:/server.log"
    start_position => "beginning"
    type => "logs"
  }
}
filter {
  grok {
    match => {
      'message' => '\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
      'message1' => '\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
      'message2' => '\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
    }
    add_field => {
      'eventName' => 'grok'
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "tuesday"
  }
}
Here is my sample log content:
[2018-02-12 05:25:22,996] ERROR [VBH-1] (ClassA.java:55) - Could not process a new task
[2018-02-13 08:02:24,690] ERROR [CTY-2] C:31:cvbb09:0x73636711c67k4g2e (ClassB.java:159) - Calling command G Update on server http://localhost/TriggerDXFGeneration?null failed because server responded with http status 400 response was: ?<?xml version="1.0" encoding="utf-8"?>
[2018-02-13 08:02:24,690] DEBUG [BHU-2] C:31:cvbb09:0x73636711c67k4g2e (ClassC.java:836) - insertDxfProcessingQueue() called with ConfigID : FTCC08_0X5A3A7E222DD2171B
[2018-02-13 08:07:51,087] ERROR [http-apr-50101-exec-2] C:10:cvbb09 (ClassD.java:133) - Exception on TestScheduler():
It is failing to parse the log content.
{
  "path" => "D://ELK/server.log",
  "@timestamp" => 2018-02-19T16:01:12.083Z,
  "@version" => "1",
  "host" => "AAEINBLR05971L",
  "message" => "[2018-02-13 08:02:24,690] DEBUG [BHU-2] C:31:cvbb09:0x73636711c67k4g2e (ClassC.java:836) - insertDxfProcessingQueue() called with ConfigID : FTCC08_0X5A3A7E222DD2171B\r",
  "type" => "logs",
  "tags" => [
    [0] "_grokparsefailure"
  ]
}
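One likely cause (an assumption, since no Logstash log output is shown): the keys of grok's match option are event field names, and these events only have a message field, so the 'message1' and 'message2' patterns are never applied and only the first pattern is tried against every line. To try several patterns against the same field, they can be passed as an array; a minimal sketch, where the three placeholders stand for the three patterns from the configuration above:

filter {
  grok {
    match => {
      # All patterns are tried against the message field in order; the first match wins
      "message" => [
        "PATTERN_1_FROM_ABOVE",
        "PATTERN_2_FROM_ABOVE",
        "PATTERN_3_FROM_ABOVE"
      ]
    }
    add_field => {
      "eventName" => "grok"
    }
  }
}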

How can I configure a custom field to be aggregatable in Kibana?

I am new to running the ELK stack. I have Logstash configured to feed my webapp log into Elasticsearch. I am trying to set up a visualization in Kibana that will show the count of unique users, given by the user_email field, which is parsed out of certain log lines.
I am fairly sure that I want to use the Unique Count aggregation, but I can't seem to get Kibana to include user_email in the list of fields which I can aggregate.
Here is my Logstash configuration:
filter {
  if [type] == "wl-proxy-log" {
    grok {
      match => {
        "message" => [
          "(?<syslog_datetime>%{SYSLOGTIMESTAMP}\s+%{YEAR})\s+<%{INT:session_id}>\s+%{DATA:log_message}\s+license=%{WORD:license}\&user=(?<user_email>%{USERNAME}\@%{URIHOST})\&files=%{WORD:files}"
        ]
      }
      break_on_match => true
    }
    date {
      match => [ "syslog_datetime", "MMM dd HH:mm:ss yyyy", "MMM d HH:mm:ss yyyy" ]
      target => "@timestamp"
      locale => "en_US"
      timezone => "America/Los_Angeles"
    }
    kv {
      source => "uri_params"
      field_split => "&?"
    }
  }
}
output {
  elasticsearch {
    ssl => false
    index => "wl-proxy"
    manage_template => false
  }
}
Here is the relevant mapping in Elasticsearch:
{
"wl-proxy" : {
"mappings" : {
"wl-proxy-log" : {
"user_email" : {
"full_name" : "user_email",
"mapping" : {
"user_email" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
}
}
Can anyone tell me what I am missing?
BTW, I am running CentOS with the following versions:
Elasticsearch Version: 6.0.0, Build: 8f0685b/2017-11-10T18:41:22.859Z, JVM: 1.8.0_151
Logstash v.6.0.0
Kibana v.6.0.0
Thanks!
I figured it out. The configuration was correct, AFAICT. The issue was that I simply hadn't refreshed the list of fields in the index in the Kibana UI.
Management -> Index Patterns -> Refresh Field List (the refresh icon)
After doing that, the field began appearing in the list of aggregatable terms, and I was able to create the necessary visualizations.
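As for the Unique Count itself: the mapping above shows user_email as a text field with a keyword sub-field, and only the keyword sub-field is aggregatable, so the entry to pick in Kibana is user_email.keyword. The same number can be checked directly against Elasticsearch with a cardinality aggregation, roughly like this (the wl-proxy index name is taken from the configuration above):

curl -X GET "localhost:9200/wl-proxy/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "unique_users": {
      "cardinality": { "field": "user_email.keyword" }
    }
  }
}
'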

Mongo query to update field

I am using a Mongo library (alex bibly) with CodeIgniter, and my collection looks like this:
{
  chart: {
    "LL": [
      {
        "isEnable": true,
        "userName": "Nishchit Dhanani"
      }
    ]
  }
}
I want to update isEnable to false.
Any help is appreciated.
First of all, you have an error in your JSON document: you cannot have key values laid out like that in a dictionary.
So your JSON should be like this:
{
  "_id" : ObjectId("526e0d7ef6eca1c46462dfb7"), // I added this ID for querying. You do not need it
  "chart" : {
    "LL" : {
      "isEnable" : false,
      "userName" : "Nishchit Dhanani"
    }
  }
}
And to do what you need, you have to use $set:
db.test.update(
  { "_id" : ObjectId("526e0d7ef6eca1c46462dfb7") },
  { $set : { "chart.LL.isEnable" : false } }
)
With your new modification (LL as an array), you need to do something like this, matching the element that is still enabled so the positional $ operator knows which one to update:
db.test.update(
  { "chart.LL.isEnable" : true },
  { $set : { "chart.LL.$.isEnable" : false } }
)
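If the LL array can hold more than one element, the positional $ operator only updates the first matching one. Here is a sketch using arrayFilters instead; this assumes MongoDB 3.6 or later, which the question does not state:

db.test.update(
  { "_id" : ObjectId("526e0d7ef6eca1c46462dfb7") },
  // Flip every array element that is currently enabled
  { $set : { "chart.LL.$[elem].isEnable" : false } },
  { arrayFilters : [ { "elem.isEnable" : true } ] }
)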
