Logfile won't appear in Elasticsearch - elasticsearch

I'm very new to Logstash and Elasticsearch. I am trying to ship my first log to Logstash so that I can (correct me if that is not the purpose) search it using Elasticsearch.
I have a log that looks like this basically:
2016-12-18 10:16:55,404 - INFO - flowManager.py - loading metadata xml
So, I have created a config file test.conf that looks like this:
input {
  file {
    path => "/home/usr/tmp/logs/mylog.log"
    type => "test-type"
    id => "NEWTRY"
  }
}
filter {
  grok {
    match => { "message" => "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second} - %{LOGLEVEL:level} - %{WORD:scriptName}.%{WORD:scriptEND} - " }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ecommerce"
    codec => line { format => "%{year}-%{month}-%{day} %{hour}:%{minute}:%{second} - %{level} - %{scriptName}.%{scriptEND} - \"%{message}\"" }
  }
}
And then: ./bin/logstash -f test.conf
I do not see the log in Elasticsearch when I go to http://localhost:9200/ecommerce or to http://localhost:9200/ecommerce/test-type/NEWTRY
Please tell me what I am doing wrong.
Thanks,
Heather

I found a solution eventually:
I added both sincedb_path => "/dev/null" (which, from what I understood, is for testing environments only) and start_position => "beginning" to the file input plugin, and the log appeared both in Elasticsearch and in Kibana, as sketched below.
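For reference, a minimal sketch of the adjusted file input (same path and settings as in the question; treating sincedb_path => "/dev/null" as a testing-only choice is my assumption):
input {
  file {
    path => "/home/usr/tmp/logs/mylog.log"
    type => "test-type"
    id => "NEWTRY"
    # read the file from the beginning instead of only tailing new lines
    start_position => "beginning"
    # do not persist the read position between runs (testing only)
    sincedb_path => "/dev/null"
  }
}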
Thanks anyway for responding and trying to help!

Related

Logstash content-based filtering into multiple indexes

I am currently pulling JSON log files from an S3 bucket. They contain different types of logs in a RawLog field, along with another value, MessageSourceType (there are more metadata fields which I don't care about). Each line in the file is a separate log, in case that makes a difference.
I currently have these all going into one index, as seen in my config below; however, I ideally want to split them out into separate indexes. For example, if MessageSourceType = Syslog - Linux Host then I need Logstash to extract the RawLog as syslog and place it into an index called logs-syslog, whereas if MessageSourceType = MS Windows Event Logging XML I want it to extract the RawLog as XML and place it in an index called logs-MS_Event_logs.
filter {
  mutate {
    replace => [ "message", "%{message}" ]
  }
  json {
    source => "message"
    remove_field => "message"
  }
}
output {
  elasticsearch {
    hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
    index => "logs-received"
  }
}
Also, for a bit of context, here is an example of one of the logs:
{"MsgClassTypeId":"3000","Direction":"0","ImpactedZoneEnum":"0","message":"<30>Feb 13 23:45:24 xx.xx.xx.xx Account=\"\" Action=\"\" Aggregate=\"False\" Amount=\"\" Archive=\"True\" BytesIn=\"\" BytesOut=\"\" CollectionSequence=\"825328\" Command=\"\" CommonEventId=\"3\" CommonEventName=\"General Operations\" CVE=\"\" DateInserted=\"2/13/2021 11:45:24 PM\" DInterface=\"\" DIP=\"\" Direction=\"0\" DirectionName=\"Unknown\" DMAC=\"\" DName=\"\" DNameParsed=\"\" DNameResolved=\"\" DNATIP=\"\" DNATPort=\"-1\" Domain=\"\" DomainOrigin=\"\" DPort=\"-1\" DropLog=\"False\" DropRaw=\"False\" Duration=\"\" EntityId=\"" EventClassification=\"-1\" EventCommonEventID=\"-1\" FalseAlarmRating=\"0\" Forward=\"False\" ForwardToLogMart=\"False\" GLPRAssignedRBP=\"-1\" Group=\"\" HasBeenInserted_EMDB=\"False\" HasBeenQueued_Archiving=\"True\" HasBeenQueued_EventProcessor=\"False\" HasBeenQueued_LogProcessor=\"True\" Hash=\"\" HostID=\"44\" IgnoreGlobalRBPCriteria=\"False\" ImpactedEntityId=\"0\" ImpactedEntityName=\"\" ImpactedHostId=\"-1\" ImpactedHostName=\"\" ImpactedLocationKey=\"\" ImpactedLocationName=\"\" ImpactedNetworkId=\"-1\" ImpactedNetworkName=\"\" ImpactedZoneEnum=\"0\" ImpactedZoneName=\"\" IsDNameParsedValue=\"True\" IsRemote=\"True\" IsSNameParsedValue=\"True\" ItemsIn=\"\" ItemsOut=\"\" LDSVERSION=\"1.1\" Login=\"\" LogMartMode=\"13627389\" LogSourceId=\"158\" LogSourceName=\"ip-xx-xx-xx-xx.eu-west-2.computer.internal Linux Syslog\" MediatorMsgID=\"0\" MediatorSessionID=\"1640\" MsgClassId=\"3999\" MsgClassName=\"Other Operations\" MsgClassTypeId=\"3000\" MsgClassTypeName=\"Operations\" MsgCount=\"1\" MsgDate=\"2021-02-13T23:45:24.0000000+00:00\" MsgDateOrigin=\"0\" MsgSourceHostID=\"44\" MsgSourceTypeId=\"88\" MsgSourceTypeName=\"Syslog - Linux Host\" NormalMsgDate=\"2021-02-13T23:45:24.0540000Z\" Object=\"\" ObjectName=\"\" ObjectType=\"\" OriginEntityId=\"0\" OriginEntityName=\"\" OriginHostId=\"-1\" OriginHostName=\"\" OriginLocationKey=\"\" OriginLocationName=\"\" OriginNetworkId=\"-1\" OriginNetworkName=\"\" OriginZoneEnum=\"0\" OriginZoneName=\"\" ParentProcessId=\"\" ParentProcessName=\"\" ParentProcessPath=\"\" PID=\"-1\" Policy=\"\" Priority=\"4\" Process=\"\" ProtocolId=\"-1\" ProtocolName=\"\" Quantity=\"\" Rate=\"\" Reason=\"\" Recipient=\"\" RecipientIdentity=\"\" RecipientIdentityCompany=\"\" RecipientIdentityDepartment=\"\" RecipientIdentityDomain=\"\" RecipientIdentityID=\"-1\" RecipientIdentityTitle=\"\" ResolvedImpactedName=\"\" ResolvedOriginName=\"\" ResponseCode=\"\" Result=\"\" RiskRating=\"0\" RootEntityId=\"9\" Sender=\"\" SenderIdentity=\"\" SenderIdentityCompany=\"\" SenderIdentityDepartment=\"\" SenderIdentityDomain=\"\" SenderIdentityID=\"-1\" SenderIdentityTitle=\"\" SerialNumber=\"\" ServiceId=\"-1\" ServiceName=\"\" Session=\"\" SessionType=\"\" Severity=\"\" SInterface=\"\" SIP=\"\" Size=\"\" SMAC=\"\" SName=\"\" SNameParsed=\"\" SNameResolved=\"\" SNATIP=\"\" SNATPort=\"-1\" SPort=\"-1\" Status=\"\" Subject=\"\" SystemMonitorID=\"9\" ThreatId=\"\" ThreatName=\"\" UniqueID=\"7d4c4ed3-a2fc-44bc-a7ec-0b8b68e7f456\" URL=\"\" UserAgent=\"\" UserImpactedIdentity=\"\" UserImpactedIdentityCompany=\"\" UserImpactedIdentityDomain=\"\" UserImpactedIdentityID=\"-1\" UserImpactedIdentityTitle=\"\" UserOriginIdentity=\"\" UserOriginIdentityCompany=\"\" UserOriginIdentityDepartment=\"\" UserOriginIdentityDomain=\"\" UserOriginIdentityID=\"-1\" UserOriginIdentityTitle=\"\" VendorInfo=\"\" VendorMsgID=\"\" Version=\"\" RawLog=\"02 13 2021 23:45:24 xx.xx.xx.xx <SYSD:INFO> Feb 
13 23:45:24 euw2-ec2--001 metricbeat[3031]: 2021-02-13T23:45:24.264Z#011ERROR#011[logstash.node_stats]#011node_stats/node_stats.go:73#011error making http request: Get \\\"https://xx.xx.xx.xx:9600/\\\": dial tcp xx.xx.xx.xx:9600: connect: connection refused\"","CollectionSequence":"825328","NormalMsgDate":"2021-02-13T23:45:24.0540000Z"}
I am a little unsure of the best way to achieve this and thought you might have some suggestions. I have looked into grok and think it may achieve my objective; however, I'm unsure where to start.
You can do this with conditionals in your filter section and define the target index according to the type of logs you're parsing.
filter {
  ... other filters ...
  if [MsgSourceTypeName] == "Syslog - Linux Host" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "logs-syslog"
      }
    }
  }
  else if [MsgSourceTypeName] == "MS Windows Event Logging XML" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "logs-ms_event_log"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
    index => "%{[@metadata][target_index]}"
  }
}
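One hedged addition to the sketch above: events whose MsgSourceTypeName matches neither condition would otherwise be indexed into a literal "%{[@metadata][target_index]}" index name, so you may want a final else branch in the same filter to route them to a fallback index (the name logs-received is just reused from the question and is an assumption):
else {
  mutate {
    # fallback for any source type not handled by the conditions above
    add_field => {
      "[@metadata][target_index]" => "logs-received"
    }
  }
}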

Logstash 6.2.4 stuck in infinite retry loop

I am using Logstash 6.2.4 with the following yml settings:
pipeline.batch.size: 600
pipeline.workers: 1
dead_letter_queue.enable: true
The conf file used to run the Logstash application is:
input {
  file {
    path => "/home/administrator/Downloads/postgresql.log.2018-10-17-06"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{DATESTAMP:timestamp} %{TZ}:%{IP:uip}\(%{NUMBER:num}\):%{WORD:dbuser}%{GREEDYDATA:msg}" }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    id => 'es-1'
    hosts => ["localhost:9200"]
    timeout => 60
    index => "dlq"
    version => "%{[@metadata][version]}"
    version_type => "external_gte"
  }
}
The input is a normal log file which is parsed using a grok filter.
Here the version is always a string rather than an integer, and thus Elasticsearch throws a 400 Bad Request error.
On this error code, Logstash should retry a finite number of times and then push the request payload to the dead letter queue (as per the documentation: https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html), but instead it gets stuck in an infinite loop with the message:
[2018-10-23T12:11:42,475][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"localhost:9200/_bulk"}
Following are the contents of the data/dead_letter_queue/main directory:
1.log (contains a single value "1")
Please advise if any configuration is missing that is leading to this situation.

logstash import csv file using config file on windows

I'm trying to import a CSV file to create data on my Elasticsearch server in order to test it,
but I'm blocked on importing the data using a config file.
This is the command (on Windows): logstash -f file.config
This is my config file:
input {
  file {
    path => "/E:/Formation/kibana/data/cars.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["maker","model","mileage","manufacture_year","engine_displacement",
                "engine_power","body_type","color_slug","stk_year","transimission","door_count",
                "seat_count","fuel_type","date_created","date_last_seen","price_eur"]
  }
  mutate {
    convert => ["mileage","integer"]
    convert => ["price_eur","float"]
    convert => ["engine_power","integer"]
    convert => ["door_count","integer"]
    convert => ["seat_count","integer"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "cars"
    document_type => "sold_cars"
  }
  stdout { }
}
And this is the error.
UPDATE: this is the log after running with --debug. Thanks for helping.
16:49:29.252 [Ruby-0-Thread-11: E:/Formation/kibana/logstash-5.4.0/logstash-core/lib/logstash/pipeline.rb:532] DEBUG logstash.pipeline - Pushing flush onto pipeline
16:49:34.257 [Ruby-0-Thread-11: E:/Formation/kibana/logstash-5.4.0/logstash-core/lib/logstash/pipeline.rb:532] DEBUG logstash.pipeline - Pushing flush onto pipeline
16:49:39.257 [Ruby-0-Thread-11: E:/Formation/kibana/logstash-5.4.0/logstash-core/lib/logstash/pipeline.rb:532] DEBUG logstash.pipeline - Pushing flush onto pipeline
16:49:43.663 [[main]<file] DEBUG logstash.inputs.file - _globbed_files: /e/Formation/kibana/data/cars.csv: glob is: []
On Windows, you should use sincedb_path => "nul" instead of sincedb_path => "/dev/null", which is used on Linux-based operating systems.
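For example, a hedged sketch of the adjusted file input (same options as in the question; dropping the stray leading slash from the path is an assumption on my part):
input {
  file {
    path => "E:/Formation/kibana/data/cars.csv"
    start_position => "beginning"
    # "nul" is the Windows null device; "/dev/null" only exists on Unix-like systems
    sincedb_path => "nul"
  }
}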

Unable to view Apache log in elasticsearch

I have installed the ELK stack on Windows and configured Logstash to read an Apache log file. I can't seem to see the output in Elasticsearch. I am very new to the ELK stack.
Environment Setup
Elasticsearch: http://localhost:9200/
Logstash :
Kibana : http://localhost:5601/
All 3 applications above are running as a service.
I have created a file called "logstash.conf" at "C:\Elk\logstash\conf\logstash.conf" to read Apache logs, with the following:
input {
  file {
    path => "C:\Elk\apache.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
I then restarted my Logstash service and now wish to see if Elasticsearch is indexing the content of my log. How do I go about doing this?
Try adding the following lines to your Logstash conf and let us know if there are any grok parsing failures, which would mean the pattern used in your filter section is not correct:
output {
  stdout { codec => json }
  file { path => "C:/POC/output3.txt" }
}
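Note that the posted config has no filter section at all, so nothing is being grokked yet; if the goal is to parse the Apache fields, a minimal sketch using Logstash's built-in pattern (assuming the combined access log format) would be:
filter {
  grok {
    # COMBINEDAPACHELOG ships with Logstash and parses the standard combined access log format
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}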

Logstash not writing output to elasticsearch

The code below is my Logstash conf file. I provide my nginx access log file as input and output to Elasticsearch. I also write the output to a text file, which works fine, but the output is never written to Elasticsearch.
input {
  file {
    path => "filepath"
    start_position => "beginning"
  }
}
output {
  file {
    path => "filepath"
  }
  elasticsearch {
    host => localhost
    port => "9200"
  }
}
I also tried executing the Logstash binary from the command line using the -e option:
input { stdin { } } output { elasticsearch { host => localhost } }
which works fine; I get the output written to Elasticsearch. But in the former case I don't. Help me solve this.
I tried a few things; I have no idea why your case with just host works. If I try it, I get timeouts. This is the configuration that works for me:
elasticsearch {
  protocol => "http"
  host => "localhost"
  port => "9200"
}
I tried this with Logstash 1.4.2 and Elasticsearch 1.4.4.
