Kafka (Confluent Platform) input for Logstash - broken message encoding - jdbc

I have Confluent Platform (version 4.1.1) configured to read data from a MySQL database with the JDBC source connector. The connector configuration is:
name = source-mysql-requests
connection.url = jdbc:mysql://localhost:3306/Requests
connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
connection.user = ***
connection.password = ***
mode = incrementing
incrementing.column.name = ID
tasks.max = 5
topic.prefix = requests_
poll.interval.ms = 1000
batch.max.rows = 100
table.poll.interval.ms = 1000
I also have Logstash (version 6.2.4) reading the relevant Kafka topic. Here is its configuration:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["requests_Operation"]
    add_field => { "[@metadata][flag]" => "operation" }
  }
}
output {
  if [@metadata][flag] == "operation" {
    stdout {
      codec => rubydebug
    }
  }
}
When I run "kafka-avro-console-consumer" for the test, I get messages of this type:
{"ID":388625154,"ISSUER_ID":"8e427b6b-1176-4d4a-8090-915fedcef870","SERVICE_ID":"mercury-g2b.service:1.4","OPERATION":"prepareOutcomingConsignmentRequest","STATUS":"COMPLETED","RECEIVE_REQUEST_DATE":1525381951000,"PRODUCE_RESULT_DATE":1525381951000}
But in Logstash I get something garbled and unreadable:
"\u0000\u0000\u0000\u0000\u0001����\u0002Hfdebfb95-218a-11e2-a69b-b499babae7ea.mercury-g2b.service:1.4DprepareOutcomingConsignmentRequest\u0012COMPLETED���X���X"
What could be going wrong?

You can change Kafka Connect to not use Avro by changing the value.converter and key.converter configurations to use JSON instead, for example.
Otherwise, you would need Logstash to know how to interpret the Schema Registry-encoded Avro data and convert it into a human-readable format.
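If you want to keep Avro, there is a community codec, logstash-codec-avro_schema_registry, that can decode the Schema Registry wire format inside the kafka input. A rough sketch is below; the option names are from memory, so check the plugin's docs, and the registry URL is an assumption about your setup:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["requests_Operation"]
    # the Avro payload must arrive as raw bytes, not a UTF-8 string
    value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    codec => avro_schema_registry {
      endpoint => "http://localhost:8081"
    }
  }
}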
Alternatively, you could use Connect's Elasticsearch or console sink and skip Logstash entirely, assuming that is the goal.
You can also use a Connect SMT (Single Message Transform) to replace the Logstash add_field : operation config; see the sketch below.
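For the converter route, a minimal sketch of what you might add to the source connector's properties; the transform alias addflag and the field name flag are just illustrative names mirroring the Logstash [@metadata][flag] field:
value.converter = org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable = false
key.converter = org.apache.kafka.connect.storage.StringConverter
# optional: replicate the Logstash add_field with an InsertField SMT
transforms = addflag
transforms.addflag.type = org.apache.kafka.connect.transforms.InsertField$Value
transforms.addflag.static.field = flag
transforms.addflag.static.value = operation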

Related

Logstash creating pipelines from Kafka not working

I am trying to get data from a Kafka topic into the ELK stack with Logstash, but I can't get the data moving.
I edited logstash.conf to the following:
input {
  tcp {
    port => 5000
  }
  kafka {
    bootstrap_servers => "broker:29092"
    topics => ["PLACES_ROWKEY"]
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    index => "from_logstash"
  }
}
I'm running this setup in Docker, if it matters (broker is the hostname of the Kafka broker container). I restarted Logstash but can't see any new indices in Elasticsearch.

Serilog - logstash index

I am sending log messages over UDP to Logstash via Serilog:
var logger = new LoggerConfiguration()
.WriteTo.Console()
.WriteTo.UDPSink("host", port)
.MinimumLevel.Is(LogEventLevel.Verbose)
.CreateLogger();
But I would like to specify the name of the Logstash index. Any idea how?
I don't know what your Logstash config looks like, so I can't give you a full answer.
But in general, your logstash.conf file should look like:
input {
  udp {
    port => ...
    id => "my_plugin_id"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1"]
    index => "%{your_defined_index}"
  }
}
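If you just want a fixed index name (optionally with a date suffix) rather than one taken from an event field, you can hard-code it in the elasticsearch output. The name serilog-%{+YYYY.MM.dd} below is only an example; the date pattern is standard Logstash sprintf syntax:
output {
  elasticsearch {
    hosts => ["127.0.0.1"]
    index => "serilog-%{+YYYY.MM.dd}"
  }
}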

sending json from one logstash to another

I have a 3-node setup:
10.x.x.1 - application and Filebeat
10.x.x.2 - machine for parsing, running Logstash
10.x.x.3 - centralized Logstash node from which we need to push messages into Elasticsearch
On 10.x.x.2, when I set the output codec to stdout, I can see the messages coming from 10.x.x.1.
Now I need to forward all the JSON messages from 10.x.x.2 to 10.x.x.3. I tried using TCP, but the messages are not getting sent.
10.x.x.2 logstash conf file
input {
  beats {
    port => 5045
  }
}
output {
  #stdout { codec => rubydebug }
  tcp {
    host => "10.x.x.3"
    port => 3389
  }
}
10.x.x.3 logstash conf file
input {
  tcp {
    host => "10.x.x.3"
    port => 3389
    #mode => "server"
    #codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
}
Is there any plugin that can send JSON data from one Logstash server to another?
Your config should work.
But you have to be careful with the "codec" properties.
First try setting it to "line" on the output AND the input plugins of the two Logstash instances, and see if logs are coming in.
With the codec set to "line" you should have no problem forwarding the logs.
Then work on the "json" properties.
Do not forget that you can activate Logstash's debug mode with the --debug argument, and log to a file with -l logFileName.
When you start working with the json codec, look for "_jsonparsefailure" tags, which could explain why logs are not being transferred between the two Logstash instances.
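Once the line test works, here is a sketch of what the two ends might look like with matching JSON codecs; json_lines is the usual codec for newline-delimited JSON over TCP, and the hosts/ports simply reuse the values from your config:
# 10.x.x.2 (sender)
output {
  tcp {
    host => "10.x.x.3"
    port => 3389
    codec => "json_lines"
  }
}
# 10.x.x.3 (receiver)
input {
  tcp {
    port => 3389
    mode => "server"
    codec => "json_lines"
  }
}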

Not getting each error email alert from logstash 1.5.4

I have my ELK setup like below:
HOST1: Component(which generates log) + Logstash (To send logs to redis)
HOST2: Redis + Elasticsearch + Logstash ( To parse data based on grok and send it to elasticsearch on same setup)
HOST3: Redis + Elasticsearch + Logstash ( To parse data based on grok and send it to elasticsearch on same setup)
HOST4: nginx + Kibana 4
Now when I send one error log line from Logstash to Redis, I get a duplicate entry in Kibana 4.
Plus I didn't get any email alert from Logstash, although it is configured to send an alert when severity == "Erro".
This is part of the Logstash conf file:
output {
  elasticsearch { host => ["<ELK IP>"] port => "9200" protocol => "http" }
  if [severity] =~ /Erro/ {
    email {
      from => "someone@somedomain.com"
      subject => "Error Alert"
      to => "someone@somedomain.com"
      via => "smtp"
      htmlbody => "<h2>Error Alert1</h2><br/><br/><div align='center'>%{message}</div>"
      options => [
        "smtpIporHost", "smtp.office365.com",
        "port", "587",
        "domain", "smtp.office365.com",
        "userName", "someone@somedomain.com",
        "password", "somepasswd",
        "authenticationType", "login",
        "starttls", "true"
      ]
    }
  }
  stdout { codec => rubydebug }
}
I am using following custom grok pattern to parse log line:
ABTIMESTAMP %{YEAR}%{MONTHNUM2}%{MONTHDAY} %{USERNAME}
ABLOGLEVEL (Note|Erro|Fatl|Warn|Urgt)
ABLOG %{ABTIMESTAMP:timestamp} %{HOST:hostname} %{WORD:servername} %{INT:pid} %{INT:lwp} %{INT:thread} %{ABLOGLEVEL:severity};%{USERNAME:event}\(%{NUMBER:msgcat}/%{NUMBER:msgnum}\)%{GREEDYDATA:greedydata}
Any help on how to get an email alert for every error log line?
Thanks in advance!
Resolved it... Actually I had multiple conf files in the logstash/conf.d folder; Logstash concatenates every file in that folder into a single pipeline, so each event was going through more than one output. I removed all the unnecessary files and kept only my conf file, and now it's working. :) Thank you Val for your help.
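If you do need to keep several files in conf.d, the usual pattern is to set a type (or tag) in each input and wrap each output in a conditional, so events from one file don't leak into another file's outputs. A rough sketch using the same 1.5-style elasticsearch output as above; the type value app_log and the Redis connection details are only illustrative:
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    type => "app_log"
  }
}
output {
  if [type] == "app_log" {
    elasticsearch { host => ["<ELK IP>"] port => "9200" protocol => "http" }
  }
}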

Where does logstash /elasticsearch write data?

In the input section of my Logstash config file, I have created a configuration for reading a RabbitMQ queue. Using the RabbitMQ console, I can see Logstash drain the queue. However, I have no idea what Logstash is doing with the messages. Is it discarding them? Is it forwarding them to Elasticsearch?
Here's the Logstash configuration:
input {
  rabbitmq {
    host => "192.168.34.151"
    exchange => "an_exchange"
    key => "a_key"
    queue => "a_queue"
  }
}
output {
  elasticsearch {
    embedded => true
    protocol => "http"
  }
}
edit - removed the bogus comma from the config.
