I got the error Failed to instantiate Partitioner class when I try to create an S3 source connector. What was done:
Installed confluent-hub and confluentinc/kafka-connect-s3-source, and exported CLASSPATH. (1.0.1 is the latest version.)
$ confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:1.0.1
$ export CLASSPATH=/connector/share/confluent-hub-components/confluentinc-kafka-connect-s3-source/lib/*
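Note: a Kafka Connect worker normally discovers connectors through the plugin.path worker setting rather than an exported CLASSPATH. A minimal sketch of that alternative, assuming the confluent-hub install directory above and the worker.properties used below:
$ echo 'plugin.path=/connector/share/confluent-hub-components' >> worker.properties
$ grep plugin.path worker.properties
# the worker scans this directory for plugins at startup; CLASSPATH is then unnecessary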
Connector settings are the defaults from the documentation (connector.properties):
name=s3-source
tasks.max=1
connector.class=io.confluent.connect.s3.source.S3SourceConnector
s3.bucket.name=confluent-kafka-connect-s3-testing
format.class=io.confluent.connect.s3.format.avro.AvroFormat
confluent.license=
confluent.topic.bootstrap.servers=kafka:9092
confluent.topic.replication.factor=1
transforms=AddPrefix
transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.AddPrefix.regex=.*
transforms.AddPrefix.replacement=copy_of_$0
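With this transform, every topic name matched by .* is rewritten, so records restored for topic foo would be routed to copy_of_foo. Before creating the connector, it can be worth confirming the worker actually sees the plugin; a sketch against the Connect REST API, assuming the default standalone REST port 8083:
$ curl -s localhost:8083/connector-plugins
# io.confluent.connect.s3.source.S3SourceConnector should appear in the list;
# if not, the worker is not picking the connector up from plugin.path/CLASSPATH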
Detailed error
$ connect-standalone.sh worker.properties connector.properties
[2019-10-16 12:36:02,410] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser:117)
[2019-10-16 12:36:02,411] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser:118)
[2019-10-16 12:36:02,412] INFO Kafka startTimeMs: 1571229362410 (org.apache.kafka.common.utils.AppInfoParser:119)
[2019-10-16 12:36:02,675] INFO License for single cluster, single node (io.confluent.license.LicenseManager:417)
[2019-10-16 12:36:02,683] INFO Closing License Store (io.confluent.license.LicenseStore:197)
[2019-10-16 12:36:02,683] INFO Stopping KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:164)
[2019-10-16 12:36:02,686] INFO [Producer clientId=s3-source-license-manager] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
[2019-10-16 12:36:02,701] INFO Stopped KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog:190)
[2019-10-16 12:36:02,702] INFO Closed License Store (io.confluent.license.LicenseStore:199)
[2019-10-16 12:36:02,704] ERROR WorkerConnector{id=s3-source} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
org.apache.kafka.connect.errors.ConnectException: Failed to instantiate Partitioner class
at io.confluent.connect.s3.source.S3SourceConnectorConfig.getPartitioner(S3SourceConnectorConfig.java:612)
at io.confluent.connect.s3.source.S3SourceConnector.doStart(S3SourceConnector.java:94)
at io.confluent.connect.s3.source.S3SourceConnector.start(S3SourceConnector.java:86)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:252)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:293)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:209)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:115)
I am unfamiliar with Java, but I am now trying to look into the source code inside the jars. Any help would be appreciated, and thanks in advance.
It sounds like your installation is a bit screwy. To run Kafka Connect under Docker you should use a dedicated image such as confluentinc/cp-kafka-connect.
To see an example of Kafka Connect deployed with Docker, have a look at http://rmoff.dev/bbuzz19_demo-code and the accompanying talk and slides.
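For reference, a minimal sketch of that image with the S3 source connector installed at container start; the CONNECT_* variable names are the ones the cp-kafka-connect image maps into the worker config, while the image tag, Docker network, and broker address are assumptions for illustration:
$ docker run -d --name connect --network kafka-net -p 8083:8083 \
    -e CONNECT_BOOTSTRAP_SERVERS=kafka:9092 \
    -e CONNECT_GROUP_ID=connect-cluster \
    -e CONNECT_CONFIG_STORAGE_TOPIC=_connect-configs \
    -e CONNECT_OFFSET_STORAGE_TOPIC=_connect-offsets \
    -e CONNECT_STATUS_STORAGE_TOPIC=_connect-status \
    -e CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1 \
    -e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
    -e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
    -e CONNECT_REST_ADVERTISED_HOST_NAME=connect \
    -e CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components \
    confluentinc/cp-kafka-connect:5.3.1 \
    bash -c 'confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:1.0.1 && /etc/confluent/docker/run'
# the connector config is then submitted over REST (POST to /connectors) instead
# of a local connector.properties file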
I am new to Kafka and am trying to install and run the console consumer, but I am getting the error java.lang.IllegalStateException: There are no in-flight requests for node -1.
The environments I tried are as below:
Kafka version: kafka_2.13-2.6.0
macOS, Java 11: fails
macOS, Java 1.8: fails
Windows 10, Java 11: succeeds
Below are the detailed steps I am performing. The same steps work on Windows.
STEP 1: Download Kafka
I downloaded Kafka from https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.0/kafka_2.13-2.6.0.tgz
STEP 2: Start the ZooKeeper service, which works fine.
I am starting ZooKeeper with the command below:
bin % zookeeper-server-start.sh ../config/zookeeper.properties
Here is the ZooKeeper log:
[2020-08-30 14:28:52,234] INFO Server environment:java.io.tmpdir=/var/folders/q7/khp8p9k14rzfs_zl52m57hlr0000gn/T/ (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.name=Mac OS X (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.arch=x86_64 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.version=10.15.6 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:user.name=jigarnaik (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:user.home=/Users/jigarnaik (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:user.dir=/Users/jigarnaik/Documents/kafka_2.13-2.6.0/bin (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.memory.free=496MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,234] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,235] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,235] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,236] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-08-30 14:28:52,243] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2020-08-30 14:28:52,247] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-08-30 14:28:52,254] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-08-30 14:28:52,268] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2020-08-30 14:28:52,271] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-08-30 14:28:52,273] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-08-30 14:28:52,286] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
STEP 3: Start Kafka; the logs look fine to me.
After that I start Kafka using the command below:
bin % kafka-server-start.sh ../config/server.properties
Kafka startup looks fine, with the log tail below:
[2020-08-30 14:33:41,708] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-08-30 14:33:41,709] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-08-30 14:33:41,709] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-08-30 14:33:41,728] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-08-30 14:33:41,757] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-08-30 14:33:41,770] INFO [SocketServer brokerId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2020-08-30 14:33:41,773] INFO [SocketServer brokerId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2020-08-30 14:33:41,773] INFO [SocketServer brokerId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2020-08-30 14:33:41,775] INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-30 14:33:41,775] INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-30 14:33:41,775] INFO Kafka startTimeMs: 1598769221773 (org.apache.kafka.common.utils.AppInfoParser)
[2020-08-30 14:33:41,776] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2020-08-30 14:33:41,808] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(word-count-output-0, word-count-input-0) (kafka.server.ReplicaFetcherManager)
[2020-08-30 14:33:41,815] INFO [Partition word-count-output-0 broker=0] Log loaded for partition word-count-output-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-08-30 14:33:41,821] INFO [Partition word-count-input-0 broker=0] Log loaded for partition word-count-input-0 with initial high watermark 0 (kafka.cluster.Partition)
STEP 4: Create a topic; the topic is created successfully.
After which I create the topic:
bin % kafka-topics.sh \
--create \
--zookeeper localhost:2181 \
--replication-factor 1 \
--partitions 1 \
--topic my-topic
Created topic my-topic.
Then I list the topics:
bin % kafka-topics.sh \
--list \
--zookeeper localhost:2181
my-topic
word-count-input
word-count-output
STEP 5: Run the console producer, which fails with the error below.
When I try to start the console producer using the command below, I get the exception java.lang.IllegalStateException: There are no in-flight requests for node -1. The same step works fine on Windows; I am getting the issue only on Mac.
bin % kafka-console-producer.sh \
--topic word-count-input \
--bootstrap-server localhost:9092
>[2020-08-30 14:35:17,902] ERROR [Producer clientId=console-producer] Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:509)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:364)
at org.apache.kafka.common.protocol.ByteBufferAccessor.readInt(ByteBufferAccessor.java:43)
at org.apache.kafka.common.message.ResponseHeaderData.read(ResponseHeaderData.java:102)
at org.apache.kafka.common.message.ResponseHeaderData.<init>(ResponseHeaderData.java:70)
at org.apache.kafka.common.requests.ResponseHeader.parse(ResponseHeader.java:66)
at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:717)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:834)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(Thread.java:748)
[2020-08-30 14:35:17,904] ERROR [Producer clientId=console-producer] Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
java.lang.IllegalStateException: There are no in-flight requests for node -1
at org.apache.kafka.clients.InFlightRequests.requestQueue(InFlightRequests.java:62)
at org.apache.kafka.clients.InFlightRequests.completeNext(InFlightRequests.java:70)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(Thread.java:748)
[2020-08-30 14:35:17,938] ERROR Uncaught exception in thread 'kafka-producer-network-thread | console-producer': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(Thread.java:748)
Finally I found the issue: the port was being used by SonarQube running on my local machine. Surprisingly, though, no error was reported when I started Kafka.
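A quick way to see what is holding the broker port before starting Kafka (macOS/Linux):
$ lsof -nP -iTCP:9092 -sTCP:LISTEN
# if the listed process is not your Kafka broker (e.g. a SonarQube java process),
# stop it or move one of the two to a different port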
Honestly, I don't know what role the firewall/antivirus played, but somehow it is linked to this problem.
In my case, the consumer was able to consume messages only once, but it threw a BufferUnderflowException when I started the consumer again.
Turning off the firewall/antivirus wasn't an option for me, so I added/enabled advertised.listeners=PLAINTEXT://localhost:9092 in the Kafka server's server.properties (sketched below), and afterwards it worked as expected.
Kafka version: 2.6.0
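A sketch of that change, assuming the kafka_2.13-2.6.0 directory layout from step 1 (edit the existing commented-out line rather than appending a duplicate key if one is already there):
$ grep -n 'advertised.listeners' config/server.properties
$ echo 'advertised.listeners=PLAINTEXT://localhost:9092' >> config/server.properties
# restart the broker so it re-reads server.properties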
My Norton antivirus firewall settings were causing this issue. I disabled the firewall settings and everything started working fine.
I got the same issue. It was happening because SonarQube also uses port 9092. To avoid this, I suggest using the SonarQube Docker image instead of a local download.
NiFi was running normally until today, but I haven't changed any configuration.
Any help is appreciated!
LOG:
ERROR [main-EventThread] o.a.c.framework.imps.EnsembleTracker Invalid config event received: {server.1=z01:2888:3888:participant, version=0, server.3=z03:2888:3888:participant, server.2=z02:2888:3888:participant}
I found this info, but it didn't help me:
https://issues.apache.org/jira/browse/NIFI-7148
I got the same error in my local ZooKeeper client environment when the ZooKeeper ensemble was version 3.4.8 and the client was 3.6.2 or 3.5.x.
When I upgraded the client ZooKeeper version to match the ensemble version, the error was gone.
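One way to compare the two sides is ZooKeeper's srvr four-letter command; a sketch assuming the ensemble from the log above listens on the default client port 2181 (newer servers may require srvr to be allowed via 4lw.commands.whitelist):
$ echo srvr | nc z01 2181
# the first line prints 'Zookeeper version: ...'; compare it against the
# zookeeper client jar bundled with NiFi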
I am trying to install Cassandra 2.2.0, but the startup hangs with the message below:
INFO 14:55:12 Starting listening for CQL clients on localhost/127.0.0.1:9042...
INFO 14:55:12 Binding thrift service to localhost/127.0.0.1:19160
INFO 14:55:12 Listening for thrift clients...
As a workaround I tried:
1) updating rpc_port to 19160 from the current 9160
2) changing start_rpc: false to start_rpc: true
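For what it's worth, those three INFO lines are normally the last thing a healthy foreground Cassandra process prints; it may be serving rather than hung. A sketch of checking from a second terminal, assuming a default local install:
$ nodetool status        # the node should show as UN (Up/Normal)
$ cqlsh 127.0.0.1 9042   # should open a CQL prompt if the server is listening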
I am trying to kerberize an AWS EMR cluster. I have enabled Hadoop security, created the Kerberos principals, and deployed them on all the nodes.
However, when I start the namenode using the command 'sudo start hadoop-hdfs-namenode', the following exception is thrown:
2016-06-08 06:14:06,515 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor (main): Number of failed storage changes from 0 to 0
2016-06-08 06:14:06,515 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager (org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor#ac4860): Updating block keys
2016-06-08 06:14:06,544 INFO org.apache.hadoop.ipc.Server (IPC Server Responder): IPC Server Responder: starting
2016-06-08 06:14:06,544 INFO org.apache.hadoop.ipc.Server (IPC Server listener on 8020): IPC Server listener on 8020: starting
2016-06-08 06:14:06,560 INFO org.apache.hadoop.hdfs.server.namenode.NameNode (main): NameNode RPC up at: ip-172-31-21-213.ap-southeast-1.compute.internal/172.31.21.213:8020
2016-06-08 06:14:06,560 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem (main): Starting services required for active state
2016-06-08 06:14:06,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor(443740501)): Starting CacheReplicationMonitor with interval 30000 milliseconds
2016-06-08 06:14:06,763 INFO org.apache.hadoop.ipc.Server (Socket Reader #1 for port 8020): Socket Reader #1 for port 8020: readAndProcess from client 172.31.21.213 threw exception [org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]]
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:1564)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1520)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
Kindly help me in this regard. Thanks in advance.
The client doesn't think security is enabled; it's only trying to use "SIMPLE" auth, where the caller is whoever they say they are. The server will only accept Kerberos tickets or Hadoop delegation tokens previously acquired by a caller with a valid Kerberos ticket.
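A sketch of what to verify on the calling side; the property name is the standard Hadoop one, while the config path and principal are assumptions based on a typical EMR layout:
$ grep -A1 'hadoop.security.authentication' /etc/hadoop/conf/core-site.xml
# must resolve to 'kerberos' in the client's config too, not only the namenode's
$ klist                  # the caller needs a valid TGT
$ kinit hdfs@EXAMPLE.COM # principal name here is hypothetical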
I am trying to use Apache Flume to save tweets to HDFS. I am currently using the Cloudera image with Hadoop and Flume. I was following the tutorial from Cloudera's blog, but I am not able to connect to the Twitter API.
I am getting the following error:
2014-03-14 09:43:14,021 INFO org.apache.flume.node.Application: Waiting for channel: MemChannel to start. Sleeping for 500 ms
2014-03-14 09:43:14,069 INFO org.apache.flume.instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
2014-03-14 09:43:14,069 INFO org.apache.flume.instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: MemChannel started
2014-03-14 09:43:14,522 INFO org.apache.flume.node.Application: Starting Sink HDFS
2014-03-14 09:43:14,522 INFO org.apache.flume.node.Application: Starting Source Twitter
2014-03-14 09:43:14,525 INFO org.apache.flume.instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
2014-03-14 09:43:14,525 INFO org.apache.flume.instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started
2014-03-14 09:43:14,595 INFO twitter4j.TwitterStreamImpl: Establishing connection.
2014-03-14 09:43:14,680 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-03-14 09:43:14,823 INFO org.mortbay.log: jetty-6.1.26
2014-03-14 09:43:14,946 INFO org.mortbay.log: Started SocketConnector#0.0.0.0:41414
2014-03-14 09:43:16,249 INFO twitter4j.TwitterStreamImpl: 401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
HTTP ERROR: 401
Problem accessing '/1.1/statuses/filter.json'. Reason:
Unauthorized
2014-03-14 09:43:16,249 INFO twitter4j.TwitterStreamImpl: Waiting for 10000 milliseconds
2014-03-14 09:43:26,251 INFO twitter4j.TwitterStreamImpl: Establishing
I have copied my Twitter API credentials into flume.conf (I have tried both on disk and in the web UI). I have also tried regenerating them and copying the new ones, but that didn't help either.
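For reference, a sketch of the credential block the Cloudera tutorial's flume.conf uses; the agent/source names follow that tutorial, and the values are elided on purpose:
$ cat >> flume.conf <<'EOF'
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = <your consumer key>
TwitterAgent.sources.Twitter.consumerSecret = <your consumer secret>
TwitterAgent.sources.Twitter.accessToken = <your access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <your access token secret>
EOF
# all four values must come from the same Twitter app, with no stray whitespace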
My pom.xml contains:
<dependency>
<groupId>org.twitter4j</groupId>
<artifactId>twitter4j-stream</artifactId>
<version>3.0.5</version>
</dependency>
That means the problem described here shouldn't apply.
And I have also set the system time with the command:
sudo ntpdate pool.ntp.org
Does anybody have an idea of what could possibly be wrong?
Thank you very much in advance for any suggestions and help.
Try upgrading to Twitter4J 3.0.6; I resolved a similar issue by upgrading to 3.0.6.
Update:
It was because of an invalid consumer key/secret or access token/secret; also make sure the system clock is in sync.
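A sketch of that version bump, scripted with the Maven versions plugin (Maven assumed on PATH); hand-editing <version>3.0.5</version> to 3.0.6 in the pom.xml above works just as well:
$ mvn versions:use-dep-version -Dincludes=org.twitter4j:twitter4j-stream -DdepVersion=3.0.6
$ mvn clean package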