I have Kafka 0.8.1.2.2 running on an HDP 2.2.4 cluster (3 brokers on 3 ZK nodes - ZK 3.4.6.2.2). All worked well for a couple of days, but now my topics seem to have become unreachable by producers and consumers. I am new to Kafka and am seeking a way to determine what has gone wrong and how to fix it, as "just re-installing" will not be an option once in production.
Previously, messages were successfully received by my topics and could then be consumed. Now, even the most basic of operations fails immediately. If I ssh to a broker node and create a new topic:
[root@dev-hdp-0 kafka]# bin/kafka-topics.sh --create --zookeeper 10.0.0.39:2181 --replication-factor 3 --partitions 3 --topic test4
Created topic "test4".
So far so good. Now, we check the description:
[root@dev-hdp-0 kafka]# bin/kafka-topics.sh --describe --zookeeper 10.0.0.39:2181 --topic test4
Topic:test4 PartitionCount:3 ReplicationFactor:3 Configs:
Topic: test4 Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 2
Topic: test4 Partition: 1 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
Topic: test4 Partition: 2 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2
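One detail worth noting in that output: the ISR for partition 0 contains only the leader (broker 2), so replicas 1 and 0 never caught up. One thing worth checking at this point is whether all three brokers still hold their registrations in ZooKeeper - a hedged sketch using the zookeeper-shell that ships with Kafka, assuming the ZK node at 10.0.0.39:
bin/zookeeper-shell.sh 10.0.0.39:2181 ls /brokers/ids
# expect [0, 1, 2]; a missing id means that broker has lost its ZK session
bin/zookeeper-shell.sh 10.0.0.39:2181 get /brokers/ids/2
# shows the host:port the broker registered; verify it is reachable from the other nodes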
OK - now if I create a consumer:
[2015-06-09 08:34:27,458] WARN [console-consumer-45097_dev-hdp-0.cloud.stp-1.sparfu.com-1433856803464-12b54195-leader-finder-thread], Failed to add leader for partitions [test4,0],[test4,2],[test4,1]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44)
at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:142)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:124)
at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:157)
at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:179)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:174)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:174)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:86)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:76)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map3.foreach(Map.scala:154)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:76)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
[2015-06-09 08:35:30,709] WARN [console-consumer-45097_dev-hdp-0.cloud.stp-1.sparfu.com-1433856803464-12b54195-leader-finder-thread], Failed to add leader for partitions [test4,0],[test4,2],[test4,1]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
I have been poking around for anything related to "Failed to add leader for partitions", as that seems to be the key, but I have yet to find anything specific that is helpful.
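Since the trace bottoms out in java.net.ConnectException: Connection timed out, one sanity check is whether this node can even open a socket to each broker's data port. A minimal sketch (substitute your broker IPs and the 6667 port HDP uses):
for b in 10.0.0.39 10.0.0.45 10.0.0.48; do nc -vz -w 5 $b 6667; done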
So, if I try using the simple consumer shell for a known partition:
[root@dev-hdp-0 kafka]# bin/kafka-simple-consumer-shell.sh --broker-list 10.0.0.39:6667,10.0.0.45:6667,10.0.0.48:6667 --skip-message-on-error --offset -1 --print-offsets --topic test4 --partition 0
Error: partition 0 does not exist for topic test4
This is despite the --describe operation clearly showing that partition 0 very much does exist.
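A possible explanation for the mismatch: --describe reads topic metadata from ZooKeeper, while the simple consumer shell asks the brokers themselves, so a broker serving stale or empty metadata can make a partition "disappear". The partition state can also be read straight from ZooKeeper for comparison (a sketch against the same ZK node):
bin/zookeeper-shell.sh 10.0.0.39:2181 get /brokers/topics/test4/partitions/0/state
# returns JSON like {"leader":2,"isr":[2],...} to compare with what the brokers report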
I have a simple Spark application that publishes a small number of messages to a topic, but this too fails to publish (on brand-new topics as well as older, previously working ones). An excerpt from the console here also alludes to leader issues:
15/06/08 15:05:35 WARN BrokerPartitionInfo: Error while fetching metadata [{TopicMetadata for topic test8 ->
No partition metadata for topic test8 due to kafka.common.LeaderNotAvailableException}] for topic [test8]: class kafka.common.LeaderNotAvailableException
15/06/08 15:05:35 ERROR DefaultEventHandler: Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test8
15/06/08 15:05:35 WARN BrokerPartitionInfo: Error while fetching metadata [{TopicMetadata for topic test8 ->
No partition metadata for topic test8 due to kafka.common.LeaderNotAvailableException}] for topic [test8]: class kafka.common.LeaderNotAvailableException
Additionally, if we try the console producer:
[root@dev-hdp-0 kafka]# bin/kafka-console-producer.sh --broker-list 10.0.0.39:6667,10.0.0.45:6667,10.0.0.48:6667 --topic test4
foo
[2015-06-09 08:58:36,456] WARN Error while fetching metadata [{TopicMetadata for topic test4 ->
No partition metadata for topic test4 due to kafka.common.LeaderNotAvailableException}] for topic [test4]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
I have scanned the logs under /var/log/kafka and nothing more descriptive than the console output presents itself. Searching on the various exceptions has yielded little more than other people with similarly mysterious issues.
That all said, is there a way to properly diagnose why my broker set suddenly stopped working when there have been no changes to the environment or configs? Has anyone encountered a similar scenario and found a corrective set of actions?
Some other details:
All nodes are CentOS 6.6 on an OpenStack private cloud
HDP Cluster 2.2.4.2-2 installed and configured using Ambari 2.0.0
Kafka service has been restarted (a few times now...)
Not sure what else might be helpful - let me know if there are other details that could help to shed light on the problem.
Thank you.
Looks like forcibly stopping (kill -9) and restarting Kafka did the trick.
Graceful shutdown didn't work.
Looking at the boot scripts, Kafka and ZooKeeper were coming up at the same time (S20kafka, S20zookeeper) - so perhaps that was the initial problem. For now... not going to reboot this thing.
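If the S20kafka/S20zookeeper ordering really was the trigger, the SysV fix would be to give Kafka a later start priority than ZooKeeper so its link regenerates as S30kafka (a sketch for CentOS 6; the exact priorities are assumptions):
# in /etc/init.d/kafka, raise the start priority in the chkconfig header, e.g.:
#   # chkconfig: 345 30 70
chkconfig kafka off && chkconfig kafka on   # regenerate the rc.d symlinks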
I'm receiving the following exception when trying to integrate Kafka with Spring Boot:
java.lang.IllegalStateException: Topic(s) [pushEvent] is/are not present and missingTopicsFatal is true
Based on this thread, I've tried to set the spring.kafka.listener.missing-topics-fatal property to false. Because I have a jHipster app, I added the following configuration to my application.yml:
spring:
  kafka:
    listener:
      missing-topics-fatal: false
Somehow the above config didn't have an effect and I still receive the above exception.
Am I missing something in the YAML config? Do I need to do something additional?
It seems the topic you are trying to produce messages to has not been created.
You can solve this problem with one of the following options:
Creating topic manually:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
Enabling auto topic creation by setting auto.create.topics.enable to true in the broker configs (the config/server.properties file on the broker side), as sketched below.
auto.create.topics.enable: Enable auto creation of topics on the server
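A minimal sketch of that broker-side change (restart the broker afterwards):
# config/server.properties
auto.create.topics.enable=true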
With reference to the answer given above (that the topic has not been created): if you're using ZooKeeper to manage Kafka, then just simply execute the command given below.
kafka-topics --create --topic name_of_topic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
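For the Spring Boot case specifically, another option (a sketch, not from the original question; the partition and replica counts are assumptions) is to let the application declare the topic itself - spring-kafka's auto-configured KafkaAdmin creates any NewTopic bean on startup:
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopicConfig {

    // Declares the missing pushEvent topic so the listener can start;
    // adjust partitions/replicas to match your cluster.
    @Bean
    public NewTopic pushEventTopic() {
        return new NewTopic("pushEvent", 1, (short) 1);
    }
}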
When I run this command from the Hive shell in our EMR cluster:
CREATE EXTERNAL TABLE my_db.my_table
(col1 string, ...)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
"dynamodb.table.name" = "table_name",
"dynamodb.column.mapping" = "col1:col1 ... "
);
I get the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.net.ConnectTimeoutException Call From ip-xx-xx-xx-xxx.ec2.internal/xx.xx.xx.xxx to ip-yy-yy-yy-yyy.ec2.internal:8020 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=ip-yy-yy-yy-yyy.ec2.internal/yy.yy.yy.yyy:8020]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout)
The EMR cluster is in VPC.
I tried editing the inbound/outbound rules of the master node's security group, so far with no success.
Thanks, Michael
AWS Support were able to assist me: the problem was that the database location in Glue was pointing to the old HDFS address ip-yy-yy-yy-yyy.ec2.internal (different from xx.xx.xx.xxx), i.e. to the master node of the previous cluster. I changed the location to point to S3 and the problem was resolved.
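In case it helps anyone else, the Glue database location can be inspected and repointed from the AWS CLI as well (a sketch; the database and bucket names are placeholders):
aws glue get-database --name my_db
aws glue update-database --name my_db \
  --database-input '{"Name":"my_db","LocationUri":"s3://my-bucket/my_db/"}'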
I am getting the error Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1 while creating a state store in my Kafka Streams application. Here is the complete stack trace from the application:
[2016-08-30 12:43:09,408] ERROR [StreamThread-1] User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group string-monitor failed on partition assignment (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
... 1 more
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
And I create the state store as below:
StateStoreSupplier avgStore = Stores.create("avgStore")
.withKeys(Serdes.String())
.withValues(Serdes.String())
.persistent()
.build();
Any idea of how to fix this?
Did you configure multiple threads within your application instance? If so, it may be due to a known issue in older versions of Kafka, where the underlying consumer used by Kafka Streams (in the app instance) can take too long to rebalance, causing itself to be kicked out of the consumer group (and hence triggering another consumer group rebalance) while it is still in the middle of the first rebalance.
The following error messages in your stacktrace indicate that you are actually running into the problem I described above:
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
The problem is summarized in this Apache Kafka ticket:
https://issues.apache.org/jira/browse/KAFKA-3758
A recent change to the underlying Kafka consumer client fixes this issue:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
However, it is not included in an official Kafka release yet and, as of today, is available only in Apache Kafka trunk. If you are able to run your app with Kafka trunk, you can verify whether this problem has gone away.
I also saw this problem when the user didn't have permission to write to the default state.dir.
When I changed the following property to a directory with good permissions, everything was fine:
property.put(StreamsConfig.STATE_DIR_CONFIG, "{goodDir}")
This was observed with Kafka 0.10.2.
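For completeness, a minimal sketch of where that property fits in the Streams configuration (the directory path is an assumption; pick one the service account can write to):
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "string-monitor");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// point state stores at a directory the app user owns instead of /tmp/kafka-streams
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");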
Just a note for those who receive the same Failed to lock the state directory exception and use Spring for Apache Kafka (spring-kafka) (example from the Spring docs):
It took me a while to figure out from the source code that Spring starts your built Streams automatically, so there is no need for you to manually do something like:
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
I did this and faced the aforementioned exception because, not surprisingly, there were two application instances running: one created by Spring and another by me.
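The Spring-managed route looks roughly like this (a sketch assuming Spring Boot's @EnableKafkaStreams auto-configuration; the topic names are placeholders) - note there is no streams.start() anywhere:
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@Configuration
@EnableKafkaStreams
public class StreamsTopology {

    // Spring builds and starts the KafkaStreams instance behind this topology,
    // so constructing/starting a second one by hand causes the lock conflict.
    @Bean
    public KStream<String, String> topology(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("input-topic");
        stream.to("output-topic");
        return stream;
    }
}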
When I run the following command with Kafka 0.9.0.1, I get these warnings [1]. Can you please tell me what is wrong with my topic? (I'm talking to a Kafka broker running in EC2.)
./kafka-console-consumer.sh --new-consumer --bootstrap-server kafka.xx.com:9092 --topic MY_TOPIC?
[1]
[2016-04-06 10:57:45,839] WARN Error while fetching metadata with correlation id 1 : {MY_TOPIC?=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
[2016-04-06 10:57:46,066] WARN Error while fetching metadata with correlation id 3 : {MY_TOPIC?=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
[2016-04-06 10:57:46,188] WARN Error while fetching metadata with correlation id 5 : {MY_TOPIC?=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
[2016-04-06 10:57:46,311] WARN Error while fetching metadata with correlation id 7 : {MY_TOPIC?=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
Your topic name is not valid because it contains the character '?', which is not a legal character for topic names.
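For reference, Kafka only allows ASCII alphanumerics, '.', '_' and '-' in topic names, so a pre-flight check along these lines (a sketch) catches the problem before the client loops on metadata fetches:
import java.util.regex.Pattern;

final class TopicNames {
    // mirrors Kafka's legal-character rule for topic names
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");

    static boolean isValid(String name) {
        return LEGAL.matcher(name).matches();
    }
}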
I got the same error. In my case the problem was spaces between the comma-separated topics in my code:
#source(type='kafka',
topic.list="p1, p2, p3",
partition.no.list='0',
threading.option='single.thread',
group.id="group",
bootstrap.servers='kafka:9092',
#map(type='json')
)
Removing the spaces finally solved it:
#source(type='kafka',
topic.list="p1,p2,p3",
partition.no.list='0',
threading.option='single.thread',
group.id="group",
bootstrap.servers='kafka:9092',
#map(type='json')
)
This happens when the producer is not able to reach the respective address. Kindly check the value of advertised.listeners in kafka/config/server.properties.
If it is commented out, there are other issues.
But if it is not, put your IP address in place of localhost and then restart both ZooKeeper and Kafka.
Try starting the console producer; hopefully it will work.
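A minimal sketch of the relevant lines in config/server.properties (the IP is a placeholder for your broker's address):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.12:9092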
Just in case anyone is having this issue related to a comma "," in a Logstash output to Kafka or a calculated topic name:
In the topic_id of the Logstash output to Kafka, we tried to create the topic_id by appending a variable we calculated in the filter.
The problem is that this field was already present in the source document and we later added it "again" in the Logstash filter, converting the string field into a hash (array/list).
So, since we used this in the Logstash output:
topic_id => ["topicName_%{field}"]
we ended up with:
topic_id : "topicName_fieldItem1,FieldItem2"
This caused the following exception in the Logstash logs:
[WARN ][org.apache.kafka.clients.NetworkClient] [Producer clientId=logstash] Error while fetching metadata with correlation id 3605264 : {topicName_fieldItem1,FieldItem2=INVALID_TOPIC_EXCEPTION}
I am new to Camus and I want to try to use it with my Kafka 0.8.
So far I downloaded the source, created the 2 queues the example expects, configured the job config file (see below), and tried to run it on my machine (details below) with this command:
$JAVA_HOME/bin/java -cp camus-example-0.1.0-SNAPSHOT.jar com.linkedin.camus.etl.kafka.CamusJob -P /root/Desktop/camus-workspace/camus-master/camus-example/target/camus.properties
The jar contains all the dependencies, like the shade file, and I am getting this error:
[EtlInputFormat] - Discrading topic : TestQueue
[EtlInputFormat] - Discrading topic : test
[EtlInputFormat] - Discrading topic : DummyLog2
[EtlInputFormat] - Discrading topic : test3
[EtlInputFormat] - Discrading topic : TwitterQueue
[EtlInputFormat] - Discrading topic : test2
[EtlInputFormat] - Discarding topic (Decoder generation failed) : DummyLog
[CodecPool] - Got brand-new compressor
[JobClient] - Running job: job_local_0001
[JobClient] - map 0% reduce 0%
[JobClient] - Job complete: job_local_0001
[JobClient] - Counters: 0
[CamusJob] - Job finished
When I tried to run it with my IntelliJ IDEA editor, I got the same error, but found the reason for it:
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
Can someone explain to me what I am doing wrong?
Camus config file:
# Needed Camus properties, more cleanup to come
# final top-level data output directory, sub-directory will be dynamically created for each topic pulled
etl.destination.path=/root/Desktop/camus-workspace/camus-master/camus-example/target/1
# HDFS location where you want to keep execution files, i.e. offsets, error logs, and count files
etl.execution.base.path=/root/Desktop/camus-workspace/camus-master/camus-example/target/2
# where completed Camus job output directories are kept, usually a sub-dir in the base.path
etl.execution.history.path=/root/Desktop/camus-workspace/camus-master/camus-example/target3
# Kafka-0.8 handles all zookeeper calls
#zookeeper.hosts=localhost:2181
#zookeeper.broker.topics=/brokers/topics
#zookeeper.broker.nodes=/brokers/ids
# Concrete implementation of the Encoder class to use (used by Kafka Audit, and thus optional for now)
#camus.message.encoder.class=com.linkedin.batch.etl.kafka.coders.DummyKafkaMessageEncoder
# Concrete implementation of the Decoder class to use
camus.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
# Used by avro-based Decoders to use as their Schema Registry
kafka.message.coder.schema.registry.class=com.linkedin.camus.example.DummySchemaRegistry
# Used by the committer to arrange .avro files into a partitioned scheme. This will be the default partitioner for all
# topic that do not have a partitioner specified
#etl.partitioner.class=com.linkedin.camus.etl.kafka.coders.DefaultPartitioner
# Partitioners can also be set on a per-topic basis
#etl.partitioner.class.<topic-name>=com.your.custom.CustomPartitioner
# all files in this dir will be added to the distributed cache and placed on the classpath for hadoop tasks
# hdfs.default.classpath.dir=/root/Desktop/camus-workspace/camus-master/camus-example/target
# max hadoop tasks to use, each task can pull multiple topic partitions
mapred.map.tasks=30
# max historical time that will be pulled from each partition based on event timestamp
kafka.max.pull.hrs=1
# events with a timestamp older than this will be discarded.
kafka.max.historical.days=3
# Max minutes for each mapper to pull messages (-1 means no limit)
kafka.max.pull.minutes.per.task=-1
# if whitelist has values, only whitelisted topic are pulled. nothing on the blacklist is pulled
kafka.blacklist.topics=
kafka.whitelist.topics=DummyLog
log4j.configuration=true
# Name of the client as seen by kafka
kafka.client.name=camus
# Fetch Request Parameters
kafka.fetch.buffer.size=
kafka.fetch.request.correlationid=
kafka.fetch.request.max.wait=
kafka.fetch.request.min.bytes=
# Connection parameters.
kafka.brokers=localhost:9092
kafka.timeout.value=
#Stops the mapper from getting inundated with Decoder exceptions for the same topic
#Default value is set to 10
max.decoder.exceptions.to.print=5
#Controls the submitting of counts to Kafka
#Default value set to true
post.tracking.counts.to.kafka=true
log4j.configuration=true
# everything below this point can be ignored for the time being, will provide more documentation down the road
##########################
etl.run.tracking.post=false
kafka.monitor.tier=
etl.counts.path=
kafka.monitor.time.granularity=10
etl.hourly=hourly
etl.daily=daily
etl.ignore.schema.errors=false
# configure output compression for deflate or snappy. Defaults to deflate
etl.output.codec=deflate
etl.deflate.level=6
#etl.output.codec=snappy
etl.default.timezone=America/Los_Angeles
etl.output.file.time.partition.mins=60
etl.keep.count.files=false
etl.execution.history.max.of.quota=.8
mapred.output.compress=true
mapred.map.max.attempts=1
kafka.client.buffer.size=20971520
kafka.client.so.timeout=60000
#zookeeper.session.timeout=
#zookeeper.connection.timeout=
Machine details:
Hortonworks HDP 2.0.0.6
with Kafka 0.8 beta 1
There is a mistake in the package name.
Change
camus.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
to
camus.message.decoder.class=com.linkedin.camus.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
Also, you need to specify some Kafka-related properties, or comment them out (that way Camus will use the default values):
# Fetch Request Parameters
# kafka.fetch.buffer.size=
# kafka.fetch.request.correlationid=
# kafka.fetch.request.max.wait=
# kafka.fetch.request.min.bytes=
# Connection parameters.
kafka.brokers=localhost:9092
# kafka.timeout.value=