We recently had a problem when our ES cluster failed. The cluster problem was resolved, but Filebeat failed to send new data after the failure.
Here's a portion of the logs; it seems to retry forever but can't send the data:
2019-04-08T11:52:04.182+0300 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.4.0
2019-04-08T11:52:04.185+0300 INFO template/load.go:73 Template already exists and will not be overwritten.
2019-04-08T11:52:04.185+0300 INFO [publish] pipeline/retry.go:172 retryer: send unwait-signal to consumer
2019-04-08T11:52:04.185+0300 INFO [publish] pipeline/retry.go:174 done
2019-04-08T11:52:59.058+0300 INFO [publish] pipeline/retry.go:149 retryer: send wait signal to consumer
2019-04-08T11:52:59.058+0300 INFO [publish] pipeline/retry.go:151 done
2019-04-08T11:53:00.065+0300 ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:172 retryer: send unwait-signal to consumer
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:174 done
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:149 retryer: send wait signal to consumer
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:151 done
I restarted the Filebeat service and all data was sent to ES without any problem.
Is this a known issue? The Filebeat version is quite old; should I update?
I'm running Filebeat 6.3.0 as a service on Windows. The Elasticsearch version is 6.4.0.
Could you show your configuration file?
I have encountered this error before because I did not write the protocol in the hosts setting.
Below is a correct configuration file:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/dmesg
    - /var/log/syslog

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://192.168.13.173:30014"]
Reference: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
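As a quick sanity check (using the host from the configuration above), hitting the endpoint directly should return a small JSON banner if the protocol and port are right:
curl http://192.168.13.173:30014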
Related
I have installed Filebeat-oss 7.12.0, OpenSearch 2.4.0, and OpenSearch Dashboards 2.4.0 on Windows.
Every service is working fine.
But no index is getting created in OpenSearch Dashboards.
There is no error.
Logs are:
INFO log/harvester.go:302 Harvester started for file: D:\data\logs.txt
2022-12-08T18:28:17.584+0530 INFO [crawler] beater/crawler.go:141 Starting input (ID: 16780016071726099597)
2022-12-08T18:28:17.585+0530 INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 2
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:164 Config reloader started
2022-12-08T18:28:17.584+0530 INFO [input.filestream] compat/compat.go:111 Input filestream starting
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:224 Loading of config files completed.
2022-12-08T18:28:20.428+0530 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2022-12-08T18:28:21.428+0530 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(http://localhost:9200))
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:223 done
2022-12-08T18:28:21.433+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.537+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.620+0530 INFO template/load.go:117 Try loading template filebeat-7.12.0 to Elasticsearch
filebeat.yml is:
filebeat.inputs:
- type: log
  paths:
    - D:\data\*

- type: filestream
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\data\*

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

# ================================== Kibana ====================================
setup.kibana:
  host: "localhost:5601"

# --------------------------- Elasticsearch output -----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

# ================================= Processors =================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
I don't know what the problem is. No index named filebeat-7.12.0 is created in OpenSearch.
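One way to double-check this outside the dashboard (a standard _cat call against the OpenSearch REST API; adjust host and port to your install) is to list the indices directly:
curl "http://localhost:9200/_cat/indices?v"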
@Android see my reply on this thread: https://stackoverflow.com/a/74984260/6101900.
You cannot forward events from Filebeat to OpenSearch, since it is not Elasticsearch.
I got the error Failed to instantiate Partitioner class when I try to create an S3 source connector. What was done:
Installed confluent-hub and confluentinc/kafka-connect-s3-source, and CLASSPATH was exported. (1.0.1 is the latest version.)
$ confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:1.0.1
$ export CLASSPATH=/connector/share/confluent-hub-components/confluentinc-kafka-connect-s3-source/lib/*
The connector settings are the defaults from the documentation (connector.properties):
name=s3-source
tasks.max=1
connector.class=io.confluent.connect.s3.source.S3SourceConnector
s3.bucket.name=confluent-kafka-connect-s3-testing
format.class=io.confluent.connect.s3.format.avro.AvroFormat
confluent.license=
confluent.topic.bootstrap.servers=kafka:9092
confluent.topic.replication.factor=1
transforms=AddPrefix
transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.AddPrefix.regex=.*
transforms.AddPrefix.replacement=copy_of_$0
Detailed error:
$ connect-standalone.sh worker.properties connector.properties
[2019-10-16 12:36:02,410] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser:117)
[2019-10-16 12:36:02,411] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser:118)
[2019-10-16 12:36:02,412] INFO Kafka startTimeMs: 1571229362410 (org.apache.kafka.common.utils.AppInfoParser:119)
[2019-10-16 12:36:02,675] INFO License for single cluster, single node (io.confluent.license.LicenseManager:417)
[2019-10-16 12:36:02,683] INFO Closing License Store (io.confluent.license.LicenseStore:197)
[2019-10-16 12:36:02,683] INFO Stopping KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:164)
[2019-10-16 12:36:02,686] INFO [Producer clientId=s3-source-license-manager] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
[2019-10-16 12:36:02,701] INFO Stopped KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:190)
[2019-10-16 12:36:02,702] INFO Closed License Store (io.confluent.license.LicenseStore:199)
[2019-10-16 12:36:02,704] ERROR WorkerConnector{id=s3-source} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
org.apache.kafka.connect.errors.ConnectException: Failed to instantiate Partitioner class
at io.confluent.connect.s3.source.S3SourceConnectorConfig.getPartitioner(S3SourceConnectorConfig.java:612)
at io.confluent.connect.s3.source.S3SourceConnector.doStart(S3SourceConnector.java:94)
at io.confluent.connect.s3.source.S3SourceConnector.start(S3SourceConnector.java:86)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:252)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:293)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:209)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:115)
I am unfamiliar with Java, but now I am trying to look into the source code inside the JARs. Any help would be appreciated, and thanks in advance.
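In case it helps others doing the same, the classes bundled in a connector JAR can be listed without any Java knowledge via the jar tool; the JAR filename below is only a guess based on the install path above, so adjust it to whatever is actually in lib/:
jar tf /connector/share/confluent-hub-components/confluentinc-kafka-connect-s3-source/lib/kafka-connect-s3-source-1.0.1.jar | grep -i partitioner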
It sounds like your installation is a bit screwy. To run Kafka Connect under Docker, you should use a dedicated image such as confluentinc/cp-kafka-connect.
To see an example of Kafka Connect deployed with Docker, have a look at http://rmoff.dev/bbuzz19_demo-code and the accompanying talk and slides.
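As a rough sketch only (not taken from that demo; the image tag, names, and single-node replication factors are assumptions to adapt), a Connect worker service entry in docker-compose.yml could look like this:
kafka-connect:
  image: confluentinc/cp-kafka-connect:5.3.1
  ports:
    - "8083:8083"                                  # Connect REST API
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"        # broker address, as in the question
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    CONNECT_GROUP_ID: "connect-cluster"
    CONNECT_CONFIG_STORAGE_TOPIC: "_connect-configs"
    CONNECT_OFFSET_STORAGE_TOPIC: "_connect-offsets"
    CONNECT_STATUS_STORAGE_TOPIC: "_connect-status"
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1" # single-broker setup
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
Connectors are then created through the REST API on port 8083 rather than with connect-standalone.sh, and plugins installed under the plugin path are picked up without any CLASSPATH export.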
I'm facing a problem starting up Filebeat on Windows 10. I modified the Filebeat prospector log path to point to the Elasticsearch log folder on my local machine's E: drive, and I validated the format of filebeat.yml after making the correction, but I still get the error below on startup.
Filebeat version : 6.2.3
Windows version: 64 bit
filebeat.yml (validated YAML format):
filebeat.prospectors:
-
  type: log
  enabled: true
  paths:
    - 'E:\Research\ELK\elasticsearch-6.2.3\logs\*.log'

filebeat.config.modules:
  path: '${path.config}/modules.d/*.yml'
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:
  host: 'localhost:5601'

output.elasticsearch:
  hosts:
    - 'localhost:9200'
  username: elastic
  password: elastic
Filebeat Startup Log:
E:\Research\ELK\filebeat-6.2.3-windows-x86_64>filebeat --setup -e
2018-03-24T22:58:39.660+0530 INFO instance/beat.go:468 Home path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Config path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Data path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data] Logs path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\logs]
2018-03-24T22:58:39.661+0530 INFO instance/beat.go:475 Beat UUID: f818bcc0-25bb-4545-bcd4-3523366a4c0e
2018-03-24T22:58:39.662+0530 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-03-24T22:58:39.662+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.665+0530 INFO pipeline/module.go:76 Beat name: DESKTOP-J932HJH
2018-03-24T22:58:39.666+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-03-24T22:58:39.666+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.672+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:58:39.672+0530 INFO kibana/client.go:69 Kibana url: http://localhost:5601
2018-03-24T22:59:08.882+0530 INFO instance/beat.go:583 Kibana dashboards successfully loaded.
2018-03-24T22:59:08.882+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:59:08.885+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:59:08.888+0530 INFO instance/beat.go:301 filebeat start running.
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:108 Loading registrar data from E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data\registry
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:119 States Loaded from registrar: 5
2018-03-24T22:59:08.888+0530 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-03-24T22:59:08.889+0530 INFO log/prospector.go:111 Configured paths: [E:\Research\ELK\elasticsearch-6.2.3\logs\*.log]
2018-03-24T22:59:08.890+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log
2018-03-24T22:59:08.892+0530 ERROR fileset/factory.go:69 Error creating prospector: No paths were defined for prospector accessing config
2018-03-24T22:59:08.892+0530 INFO crawler/crawler.go:109 Stopping Crawler
2018-03-24T22:59:08.893+0530 INFO crawler/crawler.go:119 Stopping 1 prospectors
2018-03-24T22:59:08.897+0530 INFO log/prospector.go:410 Scan aborted because prospector stopped.
2018-03-24T22:59:08.897+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch_deprecation.log
2018-03-24T22:59:08.897+0530 INFO prospector/prospector.go:121 Prospector ticker stopped
2018-03-24T22:59:08.898+0530 INFO prospector/prospector.go:138 Stopping Prospector: 18361622063543553778
2018-03-24T22:59:08.898+0530 INFO log/harvester.go:237 Reader was closed: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log. Closing.
2018-03-24T22:59:08.898+0530 INFO crawler/crawler.go:135 Crawler stopped
2018-03-24T22:59:08.899+0530 INFO registrar/registrar.go:210 Stopping Registrar
2018-03-24T22:59:08.908+0530 INFO registrar/registrar.go:165 Ending Registrar
2018-03-24T22:59:08.910+0530 INFO instance/beat.go:308 filebeat stopped.
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:132 Total non-zero metrics
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:133 Uptime: 29.3387858s
2018-03-24T22:59:08.949+0530 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-03-24T22:59:08.950+0530 ERROR instance/beat.go:667 Exiting: No paths were defined for prospector accessing config
Exiting: No paths were defined for prospector accessing config
Check the path ${path.config}/modules.d/,
or check via the command line with "filebeat.exe modules list" whether any modules are active that do not work on Windows.
For instance, the system module (system.yml) does not run on plain Windows, because there is no syslog; but the system module is active by default, so you have to disable it first.
If I have it enabled, I run into exactly the same error message, and Filebeat stops.
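For example, assuming the standard Windows package layout and running from the Filebeat install directory, listing the active modules and disabling the system module looks like this:
filebeat.exe modules list
filebeat.exe modules disable system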
Rewrite the first part of the YAML using this format:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
Also remove the empty line after the dash, and pay attention to the indentation.
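Once the indentation is fixed, the file can be validated with Filebeat's built-in configuration check (same filebeat.exe as in the startup log above):
filebeat.exe test config -c filebeat.yml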
I understand that this topic is a bit old; however, looking at the number of views it has received at the time of posting this (June 2019), I think it is safe to add more information, as this error is fairly frustrating to get while being very easy to fix.
Before I explain what I did, allow me to say that I had this problem on a Linux system, but the problem/solution should be the same on all platforms.
After updating logback-spring.xml and restarting the service, Filebeat kept refusing to start, spitting back the following error:
ERROR instance/beat.go:824 Exiting: Can only start an input when all related states are finished: {Id:163850-64780 Finished:false Fileinfo:0xc42016c1a0 Source:/some/path/here/error.log Offset:0 Timestamp:2019-06-13 09:15:35.481163602 -0400 EDT m=+0.107516982 TTL:-1ns Type:log Meta:map[] FileStateOS:163850-64780}
My solution was simply to edit /etc/filebeat/filebeat.yml and comment out as much as I could (going back to a nearly vanilla/basic configuration).
After doing so, restarting Filebeat worked; the root cause ended up being a duplicate path entry with another file somewhere in the system, possibly under the modules.
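If you want to hunt for such a duplicate instead of commenting things out, one illustrative approach (using the log path from the error above; extend the file list to wherever your configs live) is:
grep -n "/some/path/here/error.log" /etc/filebeat/filebeat.yml /etc/filebeat/modules.d/*.yml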
I'm currently working on a PoC ELK installation, and for testing purposes I'd like to re-send every log line of a file that is registered in Filebeat.
This is what I do:
I stop Filebeat
I delete the index in Logstash through Kibana
I delete the Filebeat registry file
I start Filebeat
In Kibana I can see that there are twice as many events as log lines, and I can also see that every event is duplicated once.
Why is that?
Filebeat logs:
2017-05-05T14:25:16+02:00 INFO Setup Beat: filebeat; Version: 5.2.2
2017-05-05T14:25:16+02:00 INFO Max Retries set to: 3
2017-05-05T14:25:16+02:00 INFO Activated logstash as output plugin.
2017-05-05T14:25:16+02:00 INFO Publisher name: anonymized
2017-05-05T14:25:16+02:00 INFO Flush Interval set to: 1s
2017-05-05T14:25:16+02:00 INFO Max Bulk Size set to: 2048
2017-05-05T14:25:16+02:00 INFO filebeat start running.
2017-05-05T14:25:16+02:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-05-05T14:25:16+02:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-05-05T14:25:16+02:00 INFO States Loaded from registrar: 0
2017-05-05T14:25:16+02:00 INFO Loading Prospectors: 1
2017-05-05T14:25:16+02:00 INFO Prospector with previous states loaded: 0
2017-05-05T14:25:16+02:00 INFO Loading Prospectors completed. Number of prospectors: 1
2017-05-05T14:25:16+02:00 INFO All prospectors are initialised and running with 0 states to persist
2017-05-05T14:25:16+02:00 INFO Starting Registrar
2017-05-05T14:25:16+02:00 INFO Start sending events to output
2017-05-05T14:25:16+02:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-05-05T14:25:16+02:00 INFO Starting prospector of type: log
2017-05-05T14:25:16+02:00 INFO Harvester started for file: /some/where/anonymized.log
2017-05-05T14:25:46+02:00 INFO Non-zero metrics in the last 30s: registrar.writes=2 libbeat.logstash.publish.read_bytes=54 libbeat.logstash.publish.write_bytes=32390 libbeat.logstash.published_and_acked_events=578 filebeat.harvester.running=1 registar.states.current=1 libbeat.logstash.call_count.PublishEvents=1 libbeat.publisher.published_events=578 publish.events=579 filebeat.harvester.started=1 registrar.states.update=579 filebeat.harvester.open_files=1
2017-05-05T14:26:16+02:00 INFO No non-zero metrics in the last 30s
Deleting the registry file created the problem.
Filebeat manages the state of a file and the ACK of each event with the prospector (in memory) and with the registry file (persisted on disk).
Please read the documentation here.
You can manage the _id field of each event yourself, so that any event that is duplicated (for any reason, even in a production environment) will not end up twice in Elasticsearch; the duplicate will simply update the existing document.
Create the following configuration in your Logstash pipeline config file:
# If your logs don't have a unique ID, use the following filter to generate one.
filter {
  fingerprint {
    # Use the message field, or choose other field(s) that can give you a unique ID.
    source => ["message"]
    target => "LogID"
    key => "something"
    method => "MD5"
    concatenate_sources => true
  }
}
# In your output section:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{LogID}"
    index => "yourindex"
  }
}
I am trying to run a simple Kafka producer/consumer example on HDP, but I am facing the exception below.
[2016-03-03 18:26:38,683] WARN Fetching topic metadata with correlation id 0 for topics [Set(page_visits)] from broker [BrokerEndPoint(0,sandbox.hortonworks.com,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:120)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:68)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:89)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:68)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2016-03-03 18:26:38,688] ERROR fetching topic metadata for topics [Set(page_visits)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox.hortonworks.com,9092))] failed (kafka.utils.CoreUtils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(page_visits)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox.hortonworks.com,9092))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:68)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:89)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:68)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:120)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
... 12 more
[2016-03-03 18:26:38,693] WARN Fetching topic metadata with correlation id 1 for topics [Set(page_visits)] from broker [BrokerEndPoint(0,sandbox.hortonworks.com,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
Here is the command that I am using for the producer:
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:9092 --topic page_visits
After doing a bit of googling, I found that I need to add the advertised.host.name property to the server.properties file.
Here is my server.properties file:
# Generated by Apache Ambari. Thu Mar 3 18:12:50 2016
advertised.host.name=sandbox.hortonworks.com
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.id=0
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
fetch.purgatory.purge.interval.requests=10000
host.name=sandbox.hortonworks.com
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.host=sandbox.hortonworks.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=sandbox.hortonworks.com:2181
zookeeper.connection.timeout.ms=15000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
After adding the property, I am still getting the same exception.
Any suggestions?
I had a similar problem. First I checked the listeners property for the Kafka broker in Ambari.
It is also possible to check with:
[root@sandbox bin]# cat /usr/hdp/current/kafka-broker/conf/server.properties | grep listeners
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
As you can see, Ambari replaces localhost with the hostname, and the port is the same: 6667 (note that the producer command in the question points at port 9092, where nothing is listening).
Then I checked that the broker really listens on that port:
[root@sandbox bin]# netstat -tulpn | grep 6667
tcp 0 0 10.0.2.15:6667 0.0.0.0:* LISTEN 11137/java
The next step was to launch the producer:
./kafka-console-producer.sh --broker-list 10.0.2.15:6667 --topic test
Finally, I launched the consumer:
./kafka-console-consumer.sh --zookeeper 10.0.2.15:2181 --topic test --from-beginning
After typing a few words and hitting Enter on the producer side, the consumer received the messages.
As per the log, it seems the Kafka server (the broker) is not running. The broker server should run first.
Producers and consumers are client programs that interact with the broker servers, and also with ZooKeeper.
Before running the producer or consumer, please check whether the broker and ZooKeeper are running successfully.
Run the server:
./kafka-server-start.sh ../config/server.properties
Check the logs for any errors; if there are no errors, then start producing messages to the server.
Check the ZooKeeper service as well.
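For completeness, assuming the same bin/ layout as above, starting ZooKeeper and verifying that it responds could look like this (ruok is a standard ZooKeeper four-letter health command and answers imok when healthy):
./zookeeper-server-start.sh ../config/zookeeper.properties
echo ruok | nc localhost 2181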
I modified the file /usr/hdp/current/kafka-broker/config/server.properties with the following two lines:
advertised.host.name=sandbox.hortonworks.com
listeners=PLAINTEXT://sandbox.hortonworks.com:6667,PLAINTEXT://0.0.0.0:6667
Then I ran the following commands:
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic tst2
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic tst2 --from-beginning
With this, it is working fine.