Logstash API configuration http - elasticsearch

I'm trying to create an API connection in Logstash and push the data to Elasticsearch.
Both Elasticsearch and Logstash versions are 7.1.0.
Below is the Logstash config file:
input {
  http_poller {
    urls => {
      test2 => {
        method => get
        user => "readonly"
        password => "mypass#123"
        url => "https://nfr.saas.appdynamics.com/controller/rest/applications?output=JSON"
        headers => {
          Accept => "application/json"
        }
      }
    }
    request_timeout => 60
    # Supports "cron", "every", "at" and "in" schedules by rufus scheduler
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
    # A hash of request metadata info (timing, response headers, etc.) will be sent here
    metadata_target => "http_poller_metadata"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
I am receiving an error:
An unexpected error occurred! {:error=>java.nio.file.AccessDeniedException: D:\logstash-7.1.0\logstash-7.1.0\data\.lock, :backtrace=>["sun.nio.fs.WindowsException.translateToIOException(sun/nio/fs/WindowsException.java:83)", "sun.nio.fs.WindowsException.rethrowAsIOException(sun/nio/fs/WindowsException.java:97)", "sun.nio.fs.WindowsException
Edit 1: after giving permissions to the data folder as suggested by apt-get_install_skill, below is the timeout error I received:
[0] "_http_request_failure"
],
"http_request_failure" => {
"request" => {
"method" => "get",
"url" => "https://nfr.saas.appdynamics.com/controller/rest/applications?output=JSON",
"headers" => {
"Accept" => "application/json"
},
"auth" => {
"user" => "readonly",
"pass" => "mypass#123",
"eager" => true
}
},
"runtime_seconds" => 10.004,
"name" => "test2",
"error" => "connect timed out",
"backtrace" => nil
}
}
I'm new to APIs, and I'm not sure how to simply fetch the output from the URL. Could you help me get this corrected?
The URL works when I hit it in my browser.

The problem is not the elasticsearch output but rather the permissions on the file
D:\logstash-7.1.0\logstash-7.1.0\data\.lock
as stated in the stacktrace:
error=>java.nio.file.AccessDeniedException: D:\logstash-7.1.0\logstash-7.1.0\data\.lock, :backtrace=>["sun.nio.fs.WindowsException.translateToIOException(sun/nio/fs/WindowsException.java:83)", "sun.nio.fs.WindowsException.rethrowAsIOException(sun/nio/fs/WindowsException.java:97)", "sun.nio.fs.WindowsException
You need to make sure that the user that executes logstash has the permissions to read and write this file.
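On Windows, one way to do this is with icacls. A minimal sketch, assuming Logstash runs under the account "myuser" (replace it with whatever account actually starts Logstash):

REM grant the Logstash user modify (read/write) rights on the data directory, recursively
icacls "D:\logstash-7.1.0\logstash-7.1.0\data" /grant myuser:(OI)(CI)M /T

Alternatively, starting Logstash from an elevated (Administrator) prompt usually avoids the AccessDeniedException as well.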

Related

JSON parse error, original data now in message field {:message=>"incompatible json object type=java

My Logstash filter code catches incorrect data; can anyone help me with the correct syntax? The error points to the pipeline (there are actually multiple pipelines I'm working on), and I'm also in doubt about the syntax.
input {
  file {
    path => "/var/tmp/wd/accounts/*.json"
    codec => "json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    template => "/etc/logstash/templates/accounts-template.json"
    template_name => ["accounts-template.json"]
    template_overwrite => true
    index => "accounts-%{+yyyy.MM.dd}"
    user => "user"
    password => "password"
  }
  stdout { codec => rubydebug }
}
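One thing worth checking: the file input already uses codec => "json", so each line is parsed as it is read, and the json filter on source => "message" is then parsing data that has already been parsed, which can produce JSON parse errors like the one in the title. A minimal sketch keeping only the codec and dropping the extra filter (and using a plain string for template_name, which is what that option expects) might look like this, assuming everything else stays the same:

input {
  file {
    # the json codec already parses each line into fields; no separate json filter is needed
    path => "/var/tmp/wd/accounts/*.json"
    codec => "json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    template => "/etc/logstash/templates/accounts-template.json"
    template_name => "accounts-template"
    template_overwrite => true
    index => "accounts-%{+yyyy.MM.dd}"
    user => "user"
    password => "password"
  }
  stdout { codec => rubydebug }
}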

Logstash, how to send logs from specific source to specific index

I'm trying to send logs from a specific source to a specific index.
So in logstash.conf I did the following:
input {
  gelf {
    port => 12201
    # type => docker
    use_tcp => true
    tags => ["docker"]
  }
filter {
  if "test_host" in [_source][host] {
    mutate { add_tag => "test_host" }
  }
output {
  if "test_host" in [tags] {
    stdout { }
    opensearch {
      hosts => ["https://opensearch:9200"]
      index => "my_host_index"
      user => "administrator"
      password => "some_password"
      ssl => true
      ssl_certificate_verification => false
    }
  }
But unfortunately it's not working.
What am I doing wrong?
Thanks.
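For reference, a structurally complete version of that pipeline might look like the sketch below: the input, filter and output blocks are closed, and the condition matches on [host] rather than [_source][host], since Logstash events carry the host field directly and have no _source wrapper (names and credentials are kept from the question and may need adjusting):

input {
  gelf {
    port => 12201
    use_tcp => true
    tags => ["docker"]
  }
}
filter {
  # tag events whose host field contains "test_host"
  if "test_host" in [host] {
    mutate { add_tag => ["test_host"] }
  }
}
output {
  # route only the tagged events to the dedicated index
  if "test_host" in [tags] {
    stdout { }
    opensearch {
      hosts => ["https://opensearch:9200"]
      index => "my_host_index"
      user => "administrator"
      password => "some_password"
      ssl => true
      ssl_certificate_verification => false
    }
  }
}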

How to delete all documents in elasticsearch with logstash from a search

I am using logstash to pass data to elasticsearch and I would like to know how to delete all documents.
I do this to remove the documents that come with an id, but what I need now is to delete all documents that match a fixed value, for example fixedField = "Base1", regardless of whether the id obtained from the jdbc input exists or not.
The idea is to delete all the documents in Elasticsearch where fixedField = "Base1" and then insert the new documents that I get from the jdbc input; this way I avoid leaving behind documents that no longer exist in my source (the jdbc input).
A more complete example:
My document_id values are 001, 002, 003, etc.
My fixed field is "Base1" for all three document_ids.
Any ideas?
input {
  jdbc {
    jdbc_driver_library => ""
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://xxxxx;databaseName=xxxx;"
    statement => "Select * from public.test"
  }
}
filter {
  if [is_deleted] {
    mutate {
      add_field => {
        "[@metadata][elasticsearch_action]" => "delete"
      }
    }
    mutate {
      remove_field => [ "is_deleted", "@version", "@timestamp" ]
    }
  } else {
    mutate {
      add_field => {
        "[@metadata][elasticsearch_action]" => "index"
      }
    }
    mutate {
      remove_field => [ "is_deleted", "@version", "@timestamp" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "xxxxx"
    user => "xxxxx"
    password => "xxxxx"
    index => "xxxxx"
    document_type => "_doc"
    document_id => "%{id}"
  }
  stdout { codec => rubydebug }
}
I finally managed to delete, but... the problem I have now is that apparently when the input starts, it counts the number of records it gets, and when it continues to the output it deletes on the first pass; on the following n-1 passes this error message is displayed:
[HTTP Output Failure] Encountered non-2xx HTTP code 409 {:response_code=>409, :url=>"http://localhost:9200/my_index/_delete_by_query",
The other thing I think may be happening is that _delete_by_query is not a single bulk deletion but rather a query-then-delete, which would lead to the query returning n results and the delete being attempted n times.
Any ideas on how I could run it only once, or how to avoid that error?
To clarify, the error is not displayed only once: it is displayed n-1 times, once per remaining document to be deleted.
input {
  jdbc {
    jdbc_driver_library => ""
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://xxxxx;databaseName=xxxx;"
    statement => "Select * from public.test"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{[@metadata][miEntidad]}"
    document_type => "%{[@metadata][miDocumento]}"
    document_id => "%{id}"
  }
  http {
    url => "http://localhost:9200/my_index/_delete_by_query"
    http_method => "post"
    format => "message"
    content_type => "application/json; charset=UTF-8"
    message => '{"query": { "term": { "properties.codigo.keyword": "TEX_FOR_SEARCH_AND_DELETE" } }}'
  }
}
Finally it worked like this, adding conflicts=proceed to the _delete_by_query URL so that version conflicts no longer fail with a 409:
output {
  http {
    url => "http://localhost:9200/%{[@metadata][miEntidad]}/_delete_by_query?conflicts=proceed"
    http_method => "post"
    format => "message"
    content_type => "application/json; charset=UTF-8"
    message => '{"query": { "term": { "properties.code.keyword": "%{[properties][code]}" } }}'
  }
  jdbc {
    connection_string => 'xxxxxxxx'
    statement => ["UPDATE test SET estate = 'A' WHERE entidad = ? ", "%{[@metadata][miEntidad]}"]
  }
}

How can I send logs to an nginx reverse proxy from Logstash?

My question is simple but I could not find a solution here. I have a Logstash and Elasticsearch server, and normally I can send logs to Elasticsearch through Logstash.
Please look at the code below: the logstash.conf file that ships logs to Elasticsearch.
input {
  file {
    type => "json"
    path => ["C:/logs/*.json"]
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  mutate {
    remove_field => [ "path" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
But I put an nginx between Logstash and Elasticsearch and configured it as below, and it is not working; the error shown after the config is returned.
input {
  file {
    type => "json"
    path => ["C:/logs/*.json"]
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  mutate {
    remove_field => [ "path" ]
  }
  ruby {
    code => "
      require 'base64';
      event['password'] = 'bG9ndXNlcjpidXJnYW5iYW5rXzIwMTc='
    "
  }
}
output {
  stdout {
    codec => rubydebug
  }
  http {
    http_method => "post"
    url => "http://localhost:8080;"
    format => "message"
    headers => {"Authorization" => "Basic %{password}"}
    content_type => "application/json"
    message => '{"whatever": 1 }'
  }
}
Error: [2017-09-18T11:07:30,235][ERROR][logstash.outputs.http ] [HTTP Output Failure] Encountered non-2xx HTTP code 401 {:response_code=>401, :url=>"http://localhost:8080;", :event=>2017-09-18T07:05:42.797Z test_y HTTP "EJT\ANYCUSTOMER" "" "GET" "/api/v1.0/xxxxxxx/xxxx" responded 200 in 0.0000 ms, :will_retry=>false}
A second, simpler attempt is below, but it is not working either:
input {
  file {
    type => "json"
    path => ["C:/logs/*.json"]
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  mutate {
    remove_field => [ "path" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  http {
    http_method => "post"
    url => "http://127.0.0.1:8080"
    headers => ["Authorization", "Basic dXNlcjE6JGFwcjEkNGdTY3dwMkckSlBLOXNVRmJGbzZ4UjhnSUVqYXo2Lg=="]
  }
}
ERROR : [ERROR][logstash.outputs.http ] [HTTP Output Failure] Encountered non-2xx HTTP code 401 {:response_code=>401, :url=>"http://127.0.0.1:8080",

logstash geoip filter returns _geoip_lookup_failure

I am working on Logstash and have successfully installed logstash-filter-geoip,
but when I try to use it, it returns _geoip_lookup_failure.
This is in my logstash.conf file:
filter {
  geoip {
    source => "clientip"
  }
}
This is my input for Logstash:
55.3.244.1 GET /index.html 15824 0.043
It returns:
{
      "duration" => "0.043",
       "request" => "/index.html",
    "@timestamp" => 2017-07-25T14:33:30.495Z,
        "method" => "GET",
         "bytes" => "15824",
      "@version" => "1",
          "host" => "DEs-0033",
        "client" => "55.3.244.1",
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
          "tags" => [
        [0] "_geoip_lookup_failure"
    ]
}
Try client instead of clientip:
filter {
  geoip {
    source => "client"
  }
}
The clientip field does not exist in your case; you will have to use the client field.
Alternatively, you may check the IP2Location filter plugin tutorial, which provides an example of what you are doing. For example:
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  ip2location {
    source => "clientip"
  }
}
