I'm trying to send logs from a specific source to a specific index.
So in logstash.conf I did the following:
input {
gelf {
port => 12201
# type => docker
use_tcp => true
tags => ["docker"]
}
}
filter {
if "test_host" in [_source][host] {
mutate { add_tag => "test_host"}
}
}
output {
if "test_host" in [tags] {
stdout { }
opensearch {
hosts => ["https://opensearch:9200"]
index => "my_host_index"
user => "administrator"
password => "some_password"
ssl => true
ssl_certificate_verification => false
}
}
}
But unfortunately it's not working.
What am I doing wrong?
Thanks.
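For reference, a balanced sketch of how this routing is usually written. One assumption here is mine: a Logstash event has no [_source] prefix (that wrapper only exists on documents already stored in Elasticsearch/OpenSearch), so the hostname would normally be read from the top-level [host] field that the gelf input populates:

```
input {
gelf {
port => 12201
use_tcp => true
tags => ["docker"]
}
}
filter {
# assumption: the hostname lives in the top-level [host] field on the event
if [host] and "test_host" in [host] {
mutate { add_tag => ["test_host"] }
}
}
output {
if "test_host" in [tags] {
opensearch {
hosts => ["https://opensearch:9200"]
index => "my_host_index"
user => "administrator"
password => "some_password"
ssl => true
ssl_certificate_verification => false
}
}
}
```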
I am getting 3 types of messages in Logstash; based on the type, I need to parse them and save them in Elasticsearch under 3 different indices.
Message 1 => "2021-05-26T09:55:36.091040+00:00 10.13.14.11 [S=294230650] [ID=fbf282:30:11158988] !!! Repeated 38332 times"
Message 2 => "2021-06-10T09:57:02.237521+00:00 10.13.14.11 [S=21473] |START |ABC |1aa9d286a960501696a33c107b71f21b"
Message 3 => "2021-05-26T10:51:04.139725+00:00 10.10.15.11 [2021-05-26 10:56:05,308] 4066 0002 com.sonus.sbc.sip INFO (TransportLayer.cpp:1073) - DataReadCB: Received
I tried the Logstash configuration below; all messages get parsed by the grok filters, but they get stored in the "default-%{+YYYY.MM.dd}" index.
Expected result: messages should be stored in their respective index using the "msgType" field.
input {
beats {
port => 5044
}
}
filter {
if "[S=" in [message] and "[ID=" in [message]{
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:IP}%{SPACE}\[%{DATA:SeqNo}\]%{SPACE}\[%{DATA:ID}\]%{SPACE}%{GREEDYDATA:message}" }
add_field => {
"msgType" => "message1"
}
}
}
else if "[S=" in [message]{
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:IP}%{SPACE}\[%{DATA:SeqNo}\]%{SPACE}%{GREEDYDATA:message}" }
add_field => {
"msgType" => "message2"
}
}
} else {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:IP}%{SPACE}\[%{DATA:anotherDate}\]%{SPACE}%{GREEDYDATA:messageBody}" }
add_field => {
"msgType" => "message3"
}
}
}
}
output {
if [msgType] == "message1" {
elasticsearch {
hosts => "10.133.11.23:9200"
user => "elastic"
password => "changeme"
ecs_compatibility => disabled
index => "message1-%{+YYYY.MM.dd}"
}
}
else if [msgType] == "message2" {
elasticsearch {
hosts => "10.133.11.23:9200"
user => "elastic"
password => "changeme"
ecs_compatibility => disabled
index => "message2-%{+YYYY.MM.dd}"
}
}
else if [msgType] == "message3" {
elasticsearch {
hosts => "10.133.11.23:9200"
user => "elastic"
password => "changeme"
ecs_compatibility => disabled
index => "message3-%{+YYYY.MM.dd}"
}
}
else {
elasticsearch {
hosts => "10.133.11.23:9200"
user => "elastic"
password => "changeme"
ecs_compatibility => disabled
index => "default-%{+YYYY.MM.dd}"
}
}
}
I'm trying to use the Security features in ELK. My Elastic version is 7.5.1.
I'm having a problem with the config file: I can't start Logstash.
1. First, I enabled security in elasticsearch.yml by adding xpack.security.enabled: true
2. Second, in kibana.yml I set elasticsearch.username to "elastic" and elasticsearch.password to the password I set up.
I started the elasticsearch and kibana services.
Up to here everything is OK.
3. Then I ran Logstash with the conf below:
input {
file {
path => ["/etc/logstash/handleexception1.txt"]
type => "_doc"
start_position => beginning
}
}
filter {
dissect {
mapping => {
"message" => "%{Date} %{Time} %{INFO} %{Service} Message:%{Message} ExceptionList:%{ExceptionList}"
}
}
}
output {
hosts => ["localhost:9200"]
index => "logstashhhandlerror2"
user => "elastic"
pasword => "elastic"
}
stdout { codec => rubydebug}
}
Actually, I tried both; the second attempt was:
input {
elasticsearch{
file {
path => ["/etc/logstash/handleexception1.txt"]
type => "_doc"
start_position => beginning
}
user => "elastic"
password => "elastic"
}
}
filter {
elasticsearch{
dissect {
mapping => {
"message" => "%{Date} %{Time} %{INFO} %{Service} Message:%{Message} ExceptionList:%{ExceptionList}"
}
}
user => "elastic"
password => "elastic"
}
}
output {
hosts => ["localhost:9200"]
index => "logstashhhandlerror2"
user => "elastic"
pasword => "elastic"
}
stdout { codec => rubydebug}
}
Here is the screen when I try to start logstash.service.
Thanks for reading, and I hope you have an answer for my problem.
Your point 3 config should work; you only need to make one change for index creation: wrap the output options in an elasticsearch block. Update the output:
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstashhhandlerror2"
user => "elastic"
password => "elastic"
}
stdout { codec => rubydebug }
}
I have the following configuration for Logstash.
There are three parts to this. The first is a general log which we use for all applications; they all land in here.
The second part is the application stats, for which we have a specific logger configured to push the application statistics.
The third is the click stats: whenever an event occurs on the client side, we may want to push it to Logstash on the UDP address.
All three are UDP based; we also use log4net to send the logs to Logstash.
The base install did not have a GeoIP.dat file, so I downloaded the file from https://dev.maxmind.com/geoip/legacy/geolite/ and put it in /opt/logstash/GeoIPDataFile, with 777 permissions on the file and folder.
The second thing is that I have a country name, and I need a way to show how many users from each country are viewing the application in the last 24 hours. For that reason we also capture the country name as it appears in their profile in the application.
Now I need a way to get the geo coordinates to use the tilemap in Kibana.
What am I doing wrong? If I take out the geoip { source => "country" } section, Logstash works fine.
When I check with
/opt/logstash/bin/logstash -t -f /etc/logstash/conf.d/logstash.conf
"The configuration file is ok" is what I receive. Where am I going wrong?
Any help would be great.
input {
udp {
port => 5001
type => generallog
}
udp {
port => 5003
type => applicationstats
}
udp {
port => 5002
type => clickstats
}
}
filter {
if [type] == "generallog" {
grok {
remove_field => message
match => { message => "(?m)%{TIMESTAMP_ISO8601:sourcetimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:requesthost} - %{WORD:applicationname} - %{WORD:envname} - %{GREEDYDATA:logmessage}" }
}
if !("_grokparsefailure" in [tags]) {
mutate {
replace => [ "message" , "%{logmessage}" ]
replace => [ "host" , "%{requesthost}" ]
add_tag => "generalLog"
}
}
}
if [type] == "applicationstats" {
grok {
remove_field => message
match => { message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} - %{WORD:envName}\|%{IPORHOST:actualHostMachine}\|%{WORD:applicationName}\|%{NUMBER:empId}\|%{WORD:regionCode}\|%{DATA:country}\|%{DATA:applicationName}\|%{NUMBER:staffapplicationId}\|%{WORD:applicationEvent}" }
}
geoip {
source => "country"
target => "geoip"
database => "/opt/logstash/GeoIPDataFile/GeoIP.dat"
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
convert => [ "[geoip][coordinates]", "float"]
}
if !("_grokparsefailure" in [tags]) {
mutate {
add_tag => "applicationstats"
add_tag => [ "eventFor_%{applicationName}" ]
}
}
}
if [type] == "clickstats" {
grok {
remove_field => message
match => { message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} - %{IPORHOST:remoteIP}\|%{IPORHOST:fqdnHost}\|%{IPORHOST:actualHostMachine}\|%{WORD:applicationName}\|%{WORD:envName}\|(%{NUMBER:clickId})?\|(%{DATA:clickName})?\|%{DATA:clickEvent}\|%{WORD:domainName}\\%{WORD:userName}" }
}
if !("_grokparsefailure" in [tags]) {
mutate {
add_tag => "clicksStats"
add_tag => [ "eventFor_%{clickName}" ]
}
}
}
}
output {
if [type] == "applicationstats" {
elasticsearch {
hosts => "localhost:9200"
index => "applicationstats-%{+YYYY-MM-dd}"
template => "/opt/logstash/templates/udp-applicationstats.json"
template_name => "applicationstats"
template_overwrite => true
}
}
else if [type] == "clickstats" {
elasticsearch {
hosts => "localhost:9200"
index => "clickstats-%{+YYYY-MM-dd}"
template => "/opt/logstash/templates/udp-clickstats.json"
template_name => "clickstats"
template_overwrite => true
}
}
else if [type] == "generallog" {
elasticsearch {
hosts => "localhost:9200"
index => "generallog-%{+YYYY-MM-dd}"
template => "/opt/logstash/templates/udp-generallog.json"
template_name => "generallog"
template_overwrite => true
}
}
else{
elasticsearch {
hosts => "localhost:9200"
index => "logstash-%{+YYYY-MM-dd}"
}
}
}
As per the error message, the mutate you're trying to do could be wrong. Could you please change your mutate as below:
mutate {
convert => { "geoip" => "float" }
convert => { "coordinates" => "float" }
}
I guess you've given the mutate as an array, while it is a hash type by origin. Try converting both values individually. Your database path for geoip seems fine in your filter. Is that the whole error you mentioned in the question? If not, update the question with the whole error if possible.
Refer here for in-depth explanations.
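As an editorial aside (not part of the original answer): the legacy geoip filter performs a lookup on an IP address, so pointing source at a country name will not resolve. A sketch of the usual wiring, reusing the actualHostMachine field this config already captures via %{IPORHOST:actualHostMachine}:

```
filter {
geoip {
# geoip looks up an IP address, so source should reference an IP field,
# not a country name
source => "actualHostMachine"
target => "geoip"
database => "/opt/logstash/GeoIPDataFile/GeoIP.dat"
}
}
```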
I have duplicate data in Logstash.
How can I remove this duplication?
My input is:
input
input {
file {
path => "/var/log/flask/access*"
type => "flask_access"
max_open_files => 409599
}
stdin{}
}
filter
The filter for the files is:
filter {
mutate { replace => { "type" => "flask_access" } }
grok {
match => { "message" => "%{FLASKACCESS}" }
}
mutate {
add_field => {
"temp" => "%{uniqueid} %{method}"
}
}
if "Entering" in [api_status] {
aggregate {
task_id => "%{temp}"
code => "map['blockedprocess'] = 2"
map_action => "create"
}
}
if "Entering" in [api_status] or "Leaving" in [api_status]{
aggregate {
task_id => "%{temp}"
code => "map['blockedprocess'] -= 1"
map_action => "update"
}
}
if "End Task" in [api_status] {
aggregate {
task_id => "%{temp}"
code => "event['blockedprocess'] = map['blockedprocess']"
map_action => "update"
end_of_task => true
timeout => 120
}
}
}
Take a look at the image: the same data log, at the same time, and I only sent one log request.
I solved it.
I created a unique id via document_id in the output section.
document_id points to my temp field, and temp is the unique id in my project.
My output changed to:
output {
elasticsearch {
hosts => ["localhost:9200"]
document_id => "%{temp}"
# sniffing => true
# manage_template => false
# index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
# document_type => "%{[@metadata][type]}"
}
stdout { codec => rubydebug }
}
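A common alternative to hand-building the id (not from the original answer, just a sketch of the usual approach) is the fingerprint filter, which hashes chosen fields into [@metadata] so the hash itself is never indexed as a field:

```
filter {
fingerprint {
source => ["message"]
method => "SHA1"
target => "[@metadata][fingerprint]"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
# re-ingesting the same message overwrites the same document instead of duplicating it
document_id => "%{[@metadata][fingerprint]}"
}
}
```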
Running tests in my local lab, I've just found out that Logstash is sensitive to the number of config files kept in the /etc/logstash/conf.d directory.
If there is more than one config file, you can see duplicates of the same record.
So, try removing all backup configs from the /etc/logstash/conf.d directory and restarting Logstash.
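This happens because Logstash concatenates every file it loads into a single pipeline, so each event passes through every output in every file. One way to guard against stray backups (a sketch, assuming a standard package install that reads pipelines.yml) is to restrict the glob to *.conf, so .bak or .old copies are ignored:

```
# /etc/logstash/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
```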
I used the following piece of code to create an index in logstash.conf
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
}
To create another index, I generally replace the index name in the code above. Is there any way of creating many indexes in the same file? I'm new to ELK.
You can use a pattern in your index name based on the value of one of your fields. Here we use the value of the type field in order to name the index:
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "%{type}_indexer"
}
}
You can also use several elasticsearch outputs either to the same ES host or to different ES hosts:
output {
stdout {codec => rubydebug}
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
elasticsearch {
host => "localhost"
protocol => "http"
index => "movie_indexer"
}
}
Or maybe you want to route your documents to different indices based on some variable:
output {
stdout {codec => rubydebug}
if [type] == "trial" {
elasticsearch {
host => "localhost"
protocol => "http"
index => "trial_indexer"
}
} else {
elasticsearch {
host => "localhost"
protocol => "http"
index => "movie_indexer"
}
}
}
UPDATE
The syntax has changed a little bit in Logstash 2 and 5:
output {
stdout {codec => rubydebug}
if [type] == "trial" {
elasticsearch {
hosts => "localhost:9200"
index => "trial_indexer"
}
} else {
elasticsearch {
hosts => "localhost:9200"
index => "movie_indexer"
}
}
}