Kafka Streams: disable INFO logging (Spring Boot)

Is there any way to disable the Kafka Streams processing summary INFO log? It takes up a lot of disk space.
e.g. INFO 21284 --- [-StreamThread-6] o.a.k.s.p.internals.StreamThread : stream-thread [test-20-37836474-d182-4066-a5f5-25b211e2fbdb-StreamThread-1] Processed 0 total records, ran 0 punctuators, and committed 0 total tasks since the last update
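One hedged option, assuming Spring Boot's default logging setup: raise the level of the logger that emits this summary in application.properties (the logger name is taken from the o.a.k.s.p.internals.StreamThread prefix in the line above); the equivalent <logger> entry in logback-spring.xml would work too.
# application.properties -- silence only the Streams thread summary logger
logging.level.org.apache.kafka.streams.processor.internals.StreamThread=WARN
# or, more broadly, quiet all Kafka Streams internals
logging.level.org.apache.kafka.streams=WARN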

Related

Invalid state: The Flow Controller is initializing the Data Flow

I'm trying out a test scenario: adding a new node to an already existing (currently single-node) cluster that uses an external ZooKeeper.
The lines below repeat constantly, and the UI shows "Invalid state: The Flow Controller is initializing the Data Flow."
2022-02-28 17:51:29,668 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at nifi-02:9489; will use this address for sending heartbeat messages
2022-02-28 17:51:29,668 INFO [main] o.a.n.c.p.AbstractNodeProtocolSender Cluster Coordinator is located at nifi-02:9489. Will send Cluster Connection Request to this address
2022-02-28 17:51:37,572 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Successfully deleted 0 files (0 bytes) from archive
2022-02-28 17:52:36,914 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog#13c90c06 checkpointed with 1 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 1
2022-02-28 17:52:37,581 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Successfully deleted 0 files (0 bytes) from archive
NiFi 1.15.3 is being used (unsecured setup).
It seems that the cluster coordinator is not actually listening on the advertised port on the node that is already in the cluster; I suspect this because of the timeouts, yet the new node is able to detect that a cluster coordinator is present at that node. How can this be solved?
nc (netcat) also times out against the same port.
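For reference, these are the standard nifi.properties cluster settings involved on the existing node; the property names are real NiFi settings, but the host, port, and ZooKeeper values below are only placeholders, not my actual configuration.
# nifi.properties on the existing node (placeholder values)
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-02
nifi.cluster.node.protocol.port=9489
nifi.zookeeper.connect.string=zk-host:2181
nifi.cluster.flow.election.max.candidates=1
If the protocol port were bound to the wrong interface or blocked, netcat from the new node would time out exactly as described.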

How to stop Preparing to rebalance group with old generation in Kafka?

I use Kafka for my web application, and I found the messages below in kafka.log:
[2021-07-06 08:49:03,658] INFO [GroupCoordinator 0]: Preparing to rebalance group qpcengine-group in state PreparingRebalance with old generation 105 (__consumer_offsets-28) (reason: removing member consumer-1-7eafeb56-e6fe-4161-9c88-e69c06a0ab37 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2021-07-06 08:49:03,658] INFO [GroupCoordinator 0]: Group qpcengine-group with generation 106 is now empty (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
But Kafka seems to loop like this forever for one consumer.
How can I stop it?
If you only have one partition, you don't need a consumer group.
Just use assign (not subscribe).
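A minimal sketch of that suggestion for a plain Java consumer (the topic name, partition, and bootstrap server are placeholders): assign() pins the consumer to a partition without joining a consumer group, so the broker never runs the rebalance protocol for it.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // No group.id: offsets are not committed to the coordinator, so auto-commit stays off
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() uses no group membership, so there is no "Preparing to rebalance group" cycle
            consumer.assign(Collections.singletonList(new TopicPartition("qpcengine-topic", 0)));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}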

kafka-streams instance on startup continuously logs "Found no committed offset for partition traces-1"

I have a kafka-streams app with 2 instances. This is a brand-new Kafka cluster; all topics have been created, but no messages have been written to them yet.
I start the first instance and see that it transitions from REBALANCING to RUNNING.
Now I start the next instance and notice that it continuously logs the following:
2020-01-14 18:03:57.896 [streaming-app-f2457059-c9ec-4c21-a177-be54f8d59cb2-StreamThread-2] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=streaming-app-f2457059-c9ec-4c21-a177-be54f8d59cb2-StreamThread-2-consumer, groupId=streaming-app] Found no committed offset for partition traces-1

[HDFS connector + Kafka] How to write multiple topics in standalone mode?

I am using Confluent's HDFS Connector to write streamed data to HDFS. I followed the user manual and quick start and set up my connector.
It works properly when I consume only one topic.
My property file looks like this:
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=test_topic1
hdfs.url=hdfs://localhost:9000
flush.size=30
When I add more than one topic, I see it continuously committing offsets, but I do not see it writing the committed messages.
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=2
topics=test_topic1,test_topic2
hdfs.url=hdfs://localhost:9000
flush.size=30
I tried tasks.max with both 1 and 2.
I continuously see "Committing offsets" logged, as below:
[2016-10-26 15:21:30,990] INFO Started recovery for topic partition test_topic1-0 (io.confluent.connect.hdfs.TopicPartitionWriter:193)
[2016-10-26 15:21:31,222] INFO Finished recovery for topic partition test_topic1-0 (io.confluent.connect.hdfs.TopicPartitionWriter:208)
[2016-10-26 15:21:31,230] INFO Started recovery for topic partition test_topic2-0 (io.confluent.connect.hdfs.TopicPartitionWriter:193)
[2016-10-26 15:21:31,236] INFO Finished recovery for topic partition test_topic2-0 (io.confluent.connect.hdfs.TopicPartitionWriter:208)
[2016-10-26 15:21:35,155] INFO Reflections took 6962 ms to scan 249 urls, producing 11712 keys and 77746 values (org.reflections.Reflections:229)
[2016-10-26 15:22:29,226] INFO WorkerSinkTask{id=hdfs-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:261)
[2016-10-26 15:23:29,227] INFO WorkerSinkTask{id=hdfs-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:261)
[2016-10-26 15:24:29,225] INFO WorkerSinkTask{id=hdfs-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:261)
[2016-10-26 15:25:29,224] INFO WorkerSinkTask{id=hdfs-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:261)
When I gracefully stop the service (Ctrl+C), I see it removing the tmp files.
What am I doing wrong? What is the proper way to do this?
I'd appreciate any suggestions.
I kept stumbling over the same problem you've mentioned here for the past month or so and couldn't get to the bottom of it, until today, when I upgraded to Confluent 3.1.1 and things started working as expected.
This is how I roll:
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=5
topics=accounts,contacts,users
hdfs.url=hdfs://localhost:9000
flush.size=1
hive.metastore.uris=thrift://localhost:9083
hive.integration=true
schema.compatibility=BACKWARD
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
partitioner.class=io.confluent.connect.hdfs.partitioner.HourlyPartitioner
locale=en-us
timezone=UTC

HDFS Replication Issue

I have set up a single-node cluster (initially) and am attempting to write a file from a client outside the cluster. The write call returns, but the close call hangs for a very long time; it eventually returns, and the resulting file in HDFS is 0 bytes in length. The log says:
2016-10-03 22:01:41,367 INFO BlockStateChange: chooseUnderReplicatedBlocks selected 1 blocks at priority level 0; Total=1 Reset bookmarks? true
2016-10-03 22:01:41,367 INFO BlockStateChange: BLOCK* neededReplications = 1, pendingReplications = 0.
2016-10-03 22:01:41,367 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Blocks chosen but could not be replicated = 1; of which 1 have no target, 0 have no source, 0 are UC, 0 are abandoned, 0 already have enough replicas.
Why is the block not written to the single DataNode (which is on the same host as the NameNode)? What does it mean to "have no target"? The replication factor is 1, so I would have expected a single copy of the file to be stored on the single cluster node.
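Assuming a standard Hadoop install, I can check whether the NameNode actually sees a live DataNode with free space using the standard tools below (a missing, dead, or full DataNode would leave the block with no placement target):
# Is a live DataNode registered, and does it report capacity?
hdfs dfsadmin -report
# Is the DataNode process running on the node?
jps
# What replication factor is in effect?
hdfs getconf -confKey dfs.replication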
