Unable to load class oracle.jdbc.OracleDriver in apache-flume - oracle

I'm trying to make flume-ng-sql-source work with Apache Flume so I can stream an Oracle DB to Kafka.
I found a basic tutorial here: https://www.toadworld.com/platforms/oracle/w/wiki/11524.streaming-oracle-database-table-data-to-apache-kafka
I'm using the following versions:
Flume 1.8.0, flume-ng-sql-source 1.4.4 and ojdbc7.jar
Now when I try to launch the agent I get the error
org.hibernate.boot.registry.classloading.spi.ClassLoadingException: Unable to load class [oracle.jdbc.OracleDriver]
Here is my flume.conf
agent.channels = ch1
agent.sinks = kafkaSink
agent.sources = sql-source
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1000000
agent.sources.sql-source.channels = ch1
agent.sources.sql-source.type = org.keedio.flume.source.SQLSource
# URL to connect to database
agent.sources.sql-source.hibernate.connection.url = jdbc:oracle:thin:@myoracleserver:1521:mydb
# Database connection properties
agent.sources.sql-source.hibernate.connection.user = user
agent.sources.sql-source.hibernate.connection.password = pass
agent.sources.sql-source.hibernate.dialect = org.hibernate.dialect.Oracle12cDialect
agent.sources.sql-source.hibernate.connection.driver_class = oracle.jdbc.OracleDriver
agent.sources.sql-source.table = SCHEMA.TABLE
agent.sources.sql-source.columns.to.select = *
agent.sources.sql-source.status.file.name = sql-source.status
# Increment column properties
agent.sources.sql-source.incremental.column.name = id
# Incremental value from which to start taking data (0 will import the entire table)
agent.sources.sql-source.incremental.value = 0
# Query delay: the query will be run every configured number of milliseconds
agent.sources.sql-source.run.query.delay=10000
# Status file is used to save the last read row
agent.sources.sql-source.status.file.path = /var/lib/flume
agent.sources.sql-source.status.file.name = sql-source.status
agent.sinks.kafkaSink.type=org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.brokerList=localhost:9092
agent.sinks.kafkaSink.topic=oradb
agent.sinks.kafkaSink.channel=ch1
agent.sinks.kafkaSink.batchSize=10
Here is my flume-env.sh
FLUME_CLASSPATH="/home/mache84/flume/apache-flume-1.8.0-bin/lib"
Here is the output when I launch flume
[mache84@localhost flume]$ apache-flume-1.8.0-bin/bin/flume-ng agent --conf /home/mache84/flume/apache-flume-1.8.0-bin/conf -f /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf -n agent -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume-env.sh
Info: Including Hive libraries found via () for Hive access
+ exec /usr/lib/jvm/jre-1.8.0-openjdk/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/home/mache84/flume/apache-flume-1.8.0-bin/conf:/home/mache84/flume/apache-flume-1.8.0-bin/lib/*:/home/mache84/flume/apache-flume-1.8.0-bin/lib:/home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/lib/*:/home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/libext/*:/lib/*' -Djava.library.path= org.apache.flume.node.Application -f /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf -n agent
2017-11-10 10:54:16,220 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting
2017-11-10 10:54:16,223 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:/home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf
2017-11-10 10:54:16,228 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: kafkaSink Agent: agent
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink
2017-11-10 10:54:16,236 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [agent]
2017-11-10 10:54:16,236 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:147)] Creating channels
2017-11-10 10:54:16,242 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel ch1 type memory
2017-11-10 10:54:16,247 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:201)] Created channel ch1
2017-11-10 10:54:16,248 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source sql-source, type org.keedio.flume.source.SQLSource
2017-11-10 10:54:16,250 (conf-file-poller-0) [INFO - org.keedio.flume.source.SQLSource.configure(SQLSource.java:63)] Reading and processing configuration values for source sql-source
2017-11-10 10:54:16,365 (conf-file-poller-0) [INFO - org.hibernate.annotations.common.reflection.java.JavaReflectionManager.<clinit>(JavaReflectionManager.java:66)] HCANN000001: Hibernate Commons Annotations {4.0.5.Final}
2017-11-10 10:54:16,370 (conf-file-poller-0) [INFO - org.hibernate.Version.logVersion(Version.java:54)] HHH000412: Hibernate Core {4.3.10.Final}
2017-11-10 10:54:16,372 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.<clinit>(Environment.java:239)] HHH000206: hibernate.properties not found
2017-11-10 10:54:16,373 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.buildBytecodeProvider(Environment.java:346)] HHH000021: Bytecode provider name : javassist
2017-11-10 10:54:16,393 (conf-file-poller-0) [INFO - org.keedio.flume.source.HibernateHelper.establishSession(HibernateHelper.java:63)] Opening hibernate session
2017-11-10 10:54:16,472 (conf-file-poller-0) [WARN - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:93)] HHH000402: Using Hibernate built-in connection pool (not for production use!)
2017-11-10 10:54:16,473 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:361)] Source sql-source has been removed due to an error during configuration
org.hibernate.boot.registry.classloading.spi.ClassLoadingException: Unable to load class [oracle.jdbc.OracleDriver]
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.classForName(ClassLoaderServiceImpl.java:245)
at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.loadDriverIfPossible(DriverManagerConnectionProviderImpl.java:200)
at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:156)
at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:95)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:111)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:234)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:206)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.buildJdbcConnectionAccess(JdbcServicesImpl.java:260)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:94)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:111)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:234)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:206)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1887)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1845)
at org.keedio.flume.source.HibernateHelper.establishSession(HibernateHelper.java:67)
at org.keedio.flume.source.SQLSource.configure(SQLSource.java:73)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:326)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:101)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Could not load requested class : oracle.jdbc.OracleDriver
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl$AggregatedClassLoader.findClass(ClassLoaderServiceImpl.java:230)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.classForName(ClassLoaderServiceImpl.java:242)
... 26 more
2017-11-10 10:54:16,475 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: kafkaSink, type: org.apache.flume.sink.kafka.KafkaSink
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:363)] topic is deprecated. Please use the parameter kafka.topic
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:374)] brokerList is deprecated. Please use the parameter kafka.bootstrap.servers
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:384)] batchSize is deprecated. Please use the parameter flumeBatchSize
2017-11-10 10:54:16,480 (conf-file-poller-0) [INFO - org.apache.flume.sink.kafka.KafkaSink.configure(KafkaSink.java:314)] Using the static topic oradb. This may be overridden by event headers
2017-11-10 10:54:16,484 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:116)] Channel ch1 connected to [kafkaSink]
2017-11-10 10:54:16,488 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{} sinkRunners:{kafkaSink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#6542f090 counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} }
2017-11-10 10:54:16,488 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:144)] Starting Channel ch1
2017-11-10 10:54:16,489 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:159)] Waiting for channel: ch1 to start. Sleeping for 500 ms
2017-11-10 10:54:16,524 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
2017-11-10 10:54:16,525 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: ch1 started
2017-11-10 10:54:16,990 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:171)] Starting Sink kafkaSink
2017-11-10 10:54:17,033 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:165)] ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
2017-11-10 10:54:17,065 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>(AppInfoParser.java:82)] Kafka version : 0.9.0.1
2017-11-10 10:54:17,065 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>(AppInfoParser.java:83)] Kafka commitId : 23c69d62a0cabf06
2017-11-10 10:54:17,066 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SINK, name: kafkaSink: Successfully registered new MBean.
2017-11-10 10:54:17,066 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: kafkaSink started
I have copied ojdbc7.jar into
apache-flume-1.8.0-bin/lib and
apache-flume-1.8.0-bin/plugins.d/sql-source/libext/
This could be a very basic problem since I'm new to all this.
Thanks in advance

The problem I had was simply that my ojdbc7.jar was a corrupt file.
I found a good one here: https://mvnrepository.com/artifact/oracle/ojdbc6/11.2.0.3
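If you want to sanity-check a driver jar before wiring it into Flume, a minimal sketch like the following (the class name DriverCheck is just illustrative) reproduces the lookup Hibernate performs:
// Compile and run with the suspect jar on the classpath:
//   javac DriverCheck.java && java -cp ojdbc7.jar:. DriverCheck
public class DriverCheck {
    public static void main(String[] args) {
        try {
            // Same class lookup that Hibernate's ClassLoaderServiceImpl performs.
            Class.forName("oracle.jdbc.OracleDriver");
            System.out.println("oracle.jdbc.OracleDriver loaded OK");
        } catch (ClassNotFoundException e) {
            // A corrupt or truncated jar fails here with the same
            // ClassNotFoundException seen in the Flume log above.
            System.out.println("Driver class not found: " + e);
        }
    }
}
A good jar prints the OK line; a corrupt one fails exactly like the agent did.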

Related

Trying to connect a containerized Spring Boot app with a containerized Kafka server

I have a Spring Boot application which worked fine with Kafka running in a container, but when I containerize the Spring Boot application itself it won't work.
This is the docker-compose file with which I created the Zookeeper and Kafka containers:
version: "3.4"
services:
zookeeper:
image: bitnami/zookeeper
restart: always
container_name: "zookeeper"
ports:
- "2181:2181"
volumes:
- "zookeeper_data:/bitnami"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: bitnami/kafka
ports:
- "9092:9092"
restart: always
container_name: "kafka"
volumes:
- "kafka_data:/bitnami"
environment:
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_LISTENERS=PLAINTEXT://:9092
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: local
kafka_data:
driver: local
The application.yml of the Spring Boot application
server:
  port: 5001
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL8Dialect
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://mysql-container:3306/craproject?autoReconnect=true&useSSL=false&useSSL=false&serverTimezone=UTC&createDatabaseIfNotExist=true
    username: ***
    password: ****
  data:
    mongodb:
      host: mongo-container
      port: 27017
      database: craprojet
  kafka:
    bootstrap-servers:
      - kafka:9092
    consumer:
      group-id: project-group
      enable-auto-commit: false
      auto-offset-reset: latest
      isolation-level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
The dockerfile with which I created the image of the app:
FROM openjdk:11
COPY target/project-service-1.jar project-service-1.jar
EXPOSE 5001
ENTRYPOINT ["java", "-jar" , "project-service-1.jar"]
The Kafka and database containers are running fine.
This is the command which I use to run the Spring Boot app container:
docker run --name project-service\
--network techbankNet\
-p 5001:5001\
--link mysql-container:mysql\
--link mongo-container:mongo\
--link adminer:adminer\
--link kafka:kafka project-service
The log:
2022-08-23 19:05:05.170 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-project-group-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = project-group
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_committed
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-08-23 19:05:15.315 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-23 19:05:15.317 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-23 19:05:15.319 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-project-group-1 unregistered
2022-08-23 19:05:15.319 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2022-08-23 19:05:15.340 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-23 19:05:15.343 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-23 19:05:15.365 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-23 19:05:15.368 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-23 19:05:15.389 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-23 19:05:15.420 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.9.jar!/:5.3.9]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.3.jar!/:2.5.3]
at com.project.CQRS.ProjectServiceApplication.main(ProjectServiceApplication.java:16) ~[classes!/:1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[project-service-1.jar:1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[project-service-1.jar:1]
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.1.jar!/:na]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:715) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:272) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.9.jar!/:5.3.9]
... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:728) ~[kafka-clients-2.7.1.jar!/:na]
You should put project-service (and adminer, mongo, and mysql) in the same Docker Compose file as Kafka and not use docker run.
This will create a default bridge network where the containers can talk to each other.
Or you need to attach the techbankNet Docker network to the Kafka service in the compose file.
https://docs.docker.com/compose/networking/
Also see Connect to Kafka running in Docker
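If you keep docker run, the compose-side change for the second option might look roughly like this (a sketch; it assumes techbankNet already exists, e.g. created with docker network create techbankNet):
services:
  kafka:
    # (other kafka settings as in the compose file above)
    networks:
      - techbankNet
networks:
  techbankNet:
    external: true
With that in place, the project-service container on techbankNet can resolve kafka:9092 by container name.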

Apache Camel: Help in understanding usage of SimpleScheduledRoutePolicy

Does anybody have experience with, or know how to use, SimpleScheduledRoutePolicy to manage a scheduler at runtime, i.e. to change the schedule timing or to suspend and resume it? I tried using this in a Spring Boot project but it is not working as documented. Below are my route configuration and a test class I used to test this.
@Component
public class MyRouter extends RouteBuilder {

    private static final Logger logger = LoggerFactory.getLogger(MyRouter.class);

    @Override
    public void configure() throws Exception {
        SimpleScheduledRoutePolicy simpleScheduledRoutePolicy = new SimpleScheduledRoutePolicy();
        long startTime = System.currentTimeMillis() + 5000L;
        simpleScheduledRoutePolicy.setRouteStartDate(new Date(startTime));
        simpleScheduledRoutePolicy.setRouteStartRepeatInterval(3000L);
        simpleScheduledRoutePolicy.setRouteStartRepeatCount(3);

        from("direct:myroute")
            .routeId("myroute")
            .routePolicy(simpleScheduledRoutePolicy)
            .autoStartup(true)
            .log(LoggingLevel.INFO, logger, "myroute invoked")
            .to("mock:test");
    }
}
Test class code
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest
public class MyRouterTest {

    @Autowired
    private CamelContext camelContext;

    @EndpointInject("mock:test")
    MockEndpoint resultEndpoint;

    @SneakyThrows
    @Test
    public void myRouteIsScheduledSuccessfully() {
        resultEndpoint.expectedMessageCount(2);
        Thread.sleep(7000);
        resultEndpoint.assertIsSatisfied();
    }
}
But I just get the logs below saying the scheduler started; it is not triggering every 3 seconds as configured in the policy.
I tried to invoke the direct component from the test method as well, still not working. Not sure where I am going wrong.
[INFO ] 2020-04-17 15:22:17.928 [main] MyRouterTest - Starting MyRouterTest on PPC11549 with PID 20892 (started by rmr in C:\Data\Telenet\Workspaces\atoms-event-engine)
[DEBUG] 2020-04-17 15:22:17.930 [main] MyRouterTest - Running with Spring Boot v2.2.6.RELEASE, Spring v5.2.5.RELEASE
[INFO ] 2020-04-17 15:22:17.932 [main] MyRouterTest - No active profile set, falling back to default profiles: default
[INFO ] 2020-04-17 15:22:19.634 [main] RepositoryConfigurationDelegate - Bootstrapping Spring Data JDBC repositories in DEFAULT mode.
[INFO ] 2020-04-17 15:22:19.679 [main] RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 36ms. Found 0 JDBC repository interfaces.
[INFO ] 2020-04-17 15:22:20.226 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.apache.camel.spring.boot.CamelAutoConfiguration' of type [org.apache.camel.spring.boot.CamelAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[INFO ] 2020-04-17 15:22:20.848 [main] HikariDataSource - HikariPool-1 - Starting...
[INFO ] 2020-04-17 15:22:21.437 [main] HikariDataSource - HikariPool-1 - Start completed.
[INFO ] 2020-04-17 15:22:22.602 [main] LRUCacheFactory - Detected and using LURCacheFactory: camel-caffeine-lrucache
[INFO ] 2020-04-17 15:22:24.082 [main] JobRepositoryFactoryBean - No database type set, using meta data indicating: H2
[INFO ] 2020-04-17 15:22:24.120 [main] SimpleJobLauncher - No TaskExecutor has been set, defaulting to synchronous executor.
[INFO ] 2020-04-17 15:22:24.485 [main] ThreadPoolTaskScheduler - Initializing ExecutorService 'taskScheduler'
[INFO ] 2020-04-17 15:22:24.695 [main] SpringBootRoutesCollector - Loading additional Camel XML routes from: classpath:camel/*.xml
[INFO ] 2020-04-17 15:22:24.698 [main] SpringBootRoutesCollector - Loading additional Camel XML rests from: classpath:camel-rest/*.xml
[INFO ] 2020-04-17 15:22:24.727 [main] MyRouterTest - Started MyRouterTest in 7.338 seconds (JVM running for 11.356)
[INFO ] 2020-04-17 15:22:24.729 [main] JobLauncherCommandLineRunner - Running default command line with: []
[INFO ] 2020-04-17 15:22:24.734 [main] CamelAnnotationsHandler - Setting shutdown timeout to [10 SECONDS] on CamelContext with name [camelContext].
[INFO ] 2020-04-17 15:22:24.815 [main] CamelSpringBootExecutionListener - @RunWith(CamelSpringBootRunner.class) before: class com.telenet.atoms.eventengine.camel.MyRouterTest.myRouteIsScheduledSuccessfully
[INFO ] 2020-04-17 15:22:24.817 [main] CamelSpringBootExecutionListener - Initialized CamelSpringBootRunner now ready to start CamelContext
[INFO ] 2020-04-17 15:22:24.818 [main] CamelAnnotationsHandler - Starting CamelContext with name [camelContext].
[INFO ] 2020-04-17 15:22:24.902 [main] DefaultManagementStrategy - JMX is enabled
[INFO ] 2020-04-17 15:22:25.308 [main] AbstractCamelContext - Apache Camel 3.2.0 (CamelContext: camel-1) is starting
[INFO ] 2020-04-17 15:22:25.438 [main] QuartzComponent - Create and initializing scheduler.
[INFO ] 2020-04-17 15:22:25.442 [main] QuartzComponent - Setting org.quartz.scheduler.jmx.export=true to ensure QuartzScheduler(s) will be enlisted in JMX.
[INFO ] 2020-04-17 15:22:25.483 [main] StdSchedulerFactory - Using default implementation for ThreadExecutor
[INFO ] 2020-04-17 15:22:25.487 [main] SimpleThreadPool - Job execution threads will use class loader of thread: main
[INFO ] 2020-04-17 15:22:25.511 [main] SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[INFO ] 2020-04-17 15:22:25.511 [main] QuartzScheduler - Quartz Scheduler v.2.3.2 created.
[INFO ] 2020-04-17 15:22:25.516 [main] RAMJobStore - RAMJobStore initialized.
[INFO ] 2020-04-17 15:22:25.528 [main] QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.3.2) 'DefaultQuartzScheduler-camel-1' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[INFO ] 2020-04-17 15:22:25.528 [main] StdSchedulerFactory - Quartz scheduler 'DefaultQuartzScheduler-camel-1' initialized from an externally provided properties instance.
[INFO ] 2020-04-17 15:22:25.528 [main] StdSchedulerFactory - Quartz scheduler version: 2.3.2
[INFO ] 2020-04-17 15:22:25.529 [main] AbstractCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[INFO ] 2020-04-17 15:22:25.623 [main] AbstractCamelContext - Route: myroute started and consuming from: direct://myroute
[INFO ] 2020-04-17 15:22:25.625 [main] AbstractCamelContext - Route: orderfault started and consuming from: direct://orderfault
[INFO ] 2020-04-17 15:22:25.637 [main] AbstractCamelContext - Total 2 routes, of which 2 are started
[INFO ] 2020-04-17 15:22:25.638 [main] AbstractCamelContext - Apache Camel 3.2.0 (CamelContext: camel-1) started in 0.329 seconds
[INFO ] 2020-04-17 15:22:25.663 [main] ScheduledRoutePolicy - Scheduled trigger: triggerGroup-myroute.trigger-START-myroute for action: START on route myroute
[INFO ] 2020-04-17 15:22:25.663 [main] ScheduledRoutePolicy - Scheduled trigger: triggerGroup-orderfault.trigger-START-orderfault for action: START on route orderfault
[INFO ] 2020-04-17 15:22:25.664 [main] QuartzComponent - Starting scheduler.
[INFO ] 2020-04-17 15:22:25.665 [main] QuartzScheduler - Scheduler DefaultQuartzScheduler-camel-1_$_NON_CLUSTERED started.
[INFO ] 2020-04-17 15:22:33.101 [main] MockEndpoint - Asserting: mock://test is satisfied
[INFO ] 2020-04-17 15:22:43.134 [main] MyRouterTest - ********************************************************************************
[INFO ] 2020-04-17 15:22:43.135 [main] MyRouterTest - Testing done: myRouteIsScheduledSuccessfully(com.telenet.atoms.eventengine.camel.MyRouterTest)
[INFO ] 2020-04-17 15:22:43.136 [main] MyRouterTest - Took: 17.469 seconds (17469 millis)
[INFO ] 2020-04-17 15:22:43.136 [main] MyRouterTest - ********************************************************************************
[INFO ] 2020-04-17 15:22:43.137 [main] CamelSpringBootExecutionListener - @RunWith(CamelSpringBootRunner.class) after: class com.telenet.atoms.eventengine.camel.MyRouterTest.myRouteIsScheduledSuccessfully
java.lang.AssertionError: mock://test Received message count. Expected: <2> but was: <0>
The SimpleScheduledRoutePolicy works as expected - it starts your Camel Route and that's what it is for: starting and stopping routes.
Because of your test, I guess that you want a scheduler endpoint that triggers messages at a configured interval. For this you have to use the Camel Timer, Scheduler, or Quartz component.
Since your route does not contain any of these, there is simply no scheduler that can be started.
To create a scheduler that (after an initial 5-second delay) fires every 3 seconds, you can for example use this:
from("scheduler://foo?initialDelay=5s&delay=3s")...
Change at runtime (added due to comment)
The Camel Quartz component obviously uses the Quartz scheduler, and Quartz provides JMX access. So if you want to change the schedule at runtime, this is probably your best option.
To start and stop routes at runtime you should have a look at the Camel ControlBus component.
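For illustration, a minimal ControlBus sketch, assuming a ProducerTemplate created from the injected CamelContext and the route id myroute from the question:
// Create a template from the CamelContext autowired in the test class.
ProducerTemplate template = camelContext.createProducerTemplate();
// Suspend the route at runtime via the ControlBus endpoint...
template.sendBody("controlbus:route?routeId=myroute&action=suspend", null);
// ...and resume it again later.
template.sendBody("controlbus:route?routeId=myroute&action=resume", null);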

Spring Cloud Kafka Stream Unable to create Producer Config Error

I have two Spring Boot projects with Kafka stream dependencies; they have exactly the same dependencies in Gradle and exactly the same configuration, yet one of the projects logs the error below when started:
11:35:37.974 [restartedMain] INFO o.a.k.c.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [192.169.0.109:6667]
client.id = client
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
11:35:38.017 [restartedMain] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 1.0.0
11:35:38.017 [restartedMain] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : aaa7af6d4a11b29d
11:35:38.136 [restartedMain] INFO o.s.c.s.b.k.p.KafkaTopicProvisioner - Using kafka topic for outbound: createschedule
11:36:08.147 [restartedMain] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:243)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:126)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:71)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:140)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:77)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:136)
at org.springframework.cloud.stream.binding.BindingService.doBindProducer(BindingService.java:244)
at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:221)
at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:252)
at org.springframework.cloud.stream.binding.OutputBindingLifecycle.doStartWithBindable(OutputBindingLifecycle.java:46)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:47)
at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:29)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:52)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:157)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:121)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:161)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234)
at com.und.ClientPanelApplicationKt.main(ClientPanelApplication.kt:21)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:271)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:251)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:236)
... 34 common frames omitted
I am using Kotlin 1.2.30 and Spring Boot 2.0.0.RELEASE.
The relevant part of my build.gradle is below:
ext {
    kotlinVersion = '1.2.30'
    springBootVersion = '2.0.0.RELEASE'
    springCloudVersion = 'Finchley.M8'
}
dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-stream-kafka')
}
I have defined one output channel as below:
interface EventStream {
    @Output("createschedule")
    fun outputEvent(): MessageChannel
}
Below is the relevant section of my code where I have defined the sender and listener:
@Service
class TestService {

    @Autowired
    private lateinit var eventStream: EventStream

    fun test() {
        processSchedule("Hello")
    }

    @SendTo("createschedule")
    fun processSchedule(campaign: String): String {
        return campaign
    }

    @StreamListener("createschedule")
    fun listenSchedule(campaign: String) {
        println(campaign)
        //return campaign
    }
}
Below is the relevant section of my application.yaml:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: ${KAFKA_IP}
          defaultBrokerPort: ${KAFKA_PORT}
          zkNodes: ${ZK_IP}
          defaultZkPort: ${ZK_PORT}
      bindings:
        createschedule:
          group: createscheduleGroup
          destination: createschedule
          consumer:
            group: createscheduleGroup
  kafka:
    admin:
      properties:
        #security.protocol: SSL
        client.id: client
As I already stated, it throws an error while starting, and after it starts it keeps throwing the error below in the logs:
11:54:38.231 [pool-3-thread-1] INFO o.s.c.s.b.k.p.KafkaTopicProvisioner - Using kafka topic for outbound: createschedule
11:54:59.070 [kafka-admin-client-thread | client-panel] WARN o.apache.kafka.clients.NetworkClient - [AdminClient clientId=client-panel] Connection to node -1 could not be established. Broker may not be available.
11:55:08.234 [pool-3-thread-1] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:243)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:126)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:71)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:140)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:77)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:136)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleProducerBinding$2(BindingService.java:262)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call$$$capture(Executors.java:511)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:271)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:251)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:236)
... 16 common frames omitted
It is a problem with Kafka.
If you are using Docker containers, then stop the Kafka container, remove it, and start a new Kafka container.
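For example, a sketch assuming the broker container is named kafka and is managed by a compose file (both names are assumptions):
docker stop kafka
docker rm kafka
# recreate the broker from the compose definition
docker-compose up -d kafka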

Flume does not write to HDFS from kafka topic

I am trying to read from a Kafka topic and store it to HDFS with a Flume sink; the input data is JSON. Following is my config file:
# components name
a1.sources = source1
a1.channels = channel1
a1.sinks = sink1
a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.source1.zookeeperConnect = server1:port,server2:port,server3:port
a1.sources.source1.kafka.bootstrap.servers = server1:port,server2:port,server3:port
a1.sources.source1.kafka.topics = TOPIC
a1.sources.source1.kafka.consumer.group.id = flume
a1.sources.source1.channels = channel1
a1.sources.source1.interceptors = i1
a1.sources.source1.interceptors.i1.type = timestamp
a1.sources.source1.kafka.consumer.timeout.ms = 100
a1.channels.channel1.type = memory
a1.channels.channel1.capacity = 1
a1.channels.channel1.transactionCapacity = 1
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.hdfs.path = /path/kafka/
a1.sinks.sink1.hdfs.rollInterval = 1
a1.sinks.sink1.hdfs.rollSize = 1
a1.sinks.sink1.hdfs.rollCount = 1
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.channel = channel1
a1.sinks.sink1.hdfs.idleTimeout = 10
and here is the flume-ng command:
flume-ng agent --conf-file /path/kafka_hdfs_sink.conf --conf /path/flume/conf/ --name a1;
Following are the logs after loading all dependencies:
18/01/30 12:15:24 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
18/01/30 12:15:24 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/fk010191/kafka/kafka_hdfs_sink.conf
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Added sinks: sink1 Agent: a1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Processing:sink1
18/01/30 12:15:24 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Creating channels
18/01/30 12:15:24 INFO channel.DefaultChannelFactory: Creating instance of channel channel1 type memory
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Created channel channel1
18/01/30 12:15:24 INFO source.DefaultSourceFactory: Creating instance of source source1, type org.apache.flume.source.kafka.KafkaSource
18/01/30 12:15:24 ERROR node.AbstractConfigurationProvider: Source source1 has been removed due to an error during configuration
org.apache.flume.conf.ConfigurationException: Kafka topic must be specified.
at org.apache.flume.source.kafka.KafkaSource.doConfigure(KafkaSource.java:183)
at org.apache.flume.source.BasicSourceSemantics.configure(BasicSourceSemantics.java:65)
at org.apache.flume.source.AbstractPollableSource.configure(AbstractPollableSource.java:63)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
18/01/30 12:15:24 INFO sink.DefaultSinkFactory: Creating instance of sink: sink1, type: hdfs
18/01/30 12:15:24 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
18/01/30 12:15:24 INFO node.AbstractConfigurationProvider: Channel channel1 connected to [sink1]
18/01/30 12:15:24 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#4c601ec4 counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.MemoryChannel{name: channel1}} }
18/01/30 12:15:24 INFO node.Application: Starting Channel channel1
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: channel1 started
18/01/30 12:15:24 INFO node.Application: Starting Sink sink1
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: sink1: Successfully registered new MBean.
18/01/30 12:15:24 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: sink1 started
When I stop it after a while, I see the output below, but it does not write anything to HDFS:
^C18/01/30 12:21:16 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 23
18/01/30 12:21:16 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider stopping
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: sink1 stopped
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.start.time == 1517336124877
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.stop.time == 1517336476542
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.complete == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.empty == 46
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.batch.underflow == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.closed.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.creation.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.connection.failed.count == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.event.drain.attempt == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: sink1. sink.event.drain.sucess == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: channel1 stopped
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.start.time == 1517336124874
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.stop.time == 1517336476543
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.capacity == 1
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.current.size == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.attempt == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.success == 0
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.attempt == 46
18/01/30 12:21:16 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.success == 0
When I run the following command - bin/kafka-console-consumer.sh --bootstrap-server server1:port,server2:port,server3:port --topic TOPIC --from-beginning - I can see data in the topic, but Flume is not writing anything to HDFS. Any help is greatly appreciated.
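Note: in the startup log above, sourceRunners:{} is empty, and the shutdown metrics show channel.event.put.attempt == 0, so no source ever delivered events into channel1; the sink simply drained 46 empty batches (sink.batch.empty == 46). That pattern usually means the source is missing from, or misnamed in, the configuration for the agent name passed with -n. A minimal sketch of the wiring that has to line up, assuming a Kafka source feeding an HDFS sink (all names and the HDFS path here are hypothetical; substitute your own):
agent1.sources = kafka-source
agent1.channels = channel1
agent1.sinks = sink1
# Source, channel, and sink must all be declared under the same agent name passed with -n
agent1.sources.kafka-source.type = org.apache.flume.source.kafka.KafkaSource
agent1.sources.kafka-source.kafka.bootstrap.servers = server1:port
agent1.sources.kafka-source.kafka.topics = TOPIC
agent1.sources.kafka-source.channels = channel1
agent1.channels.channel1.type = memory
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode/flume/events
agent1.sinks.sink1.channel = channel1
If the agent prefix or the source name in any of these keys does not match, Flume starts the channel and sink but silently runs with no source, exactly as in the log above.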

Spark Streaming is not streaming, but waits showing consumer-config values

I am trying to stream data using the spark-streaming-kafka-0-10_2.11 artifact (version 2.1.1) along with spark-streaming_2.11 version 2.1.1 and kafka_2.11 version 0.10.2.1.
When I start the program, Spark does not stream any data, and it does not throw any error either.
Below are my dependencies, code, and response.
Dependencies:
"org.apache.kafka" % "kafka_2.11" % "0.10.2.1",
"org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1",
"org.apache.spark" % "spark-streaming_2.11" % "2.1.1"
Code:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import scala.util.Random

val sparkConf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("kafka-dstream")
  .set("spark.driver.allowMultipleContexts", "true")
val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "ec2-xx-yy-zz-abc.us-west-2.compute.amazonaws.com:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  // Random suffix so every run starts as a fresh consumer group
  "group.id" -> ("java-test-consumer-" + Random.nextInt() + "-" + System.currentTimeMillis()),
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val topics = Array("test")
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)
// Print the key of every record in each partition of each micro-batch
stream.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    records.foreach(record => println(">>>>>>>>>>>>>>>>>>>>>>>>>>>" + record.key()))
  }
}
ssc.start()
ssc.awaitTermination()
Response (Logs):
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Loading project definition from D:\Scala_Workspace\LM\project
[info] Set current project to LM (in build file:/D:/Scala_Workspace/LM/)
[info] Running consumer.One
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/09 12:43:44 INFO SparkContext: Running Spark version 2.1.1
17/06/09 12:43:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/09 12:43:44 INFO SecurityManager: Changing view acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing view acls groups to:
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls groups to:
17/06/09 12:43:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ee208961); groups with view permissions: Set(); users with modify permissions: Set(ee208961); groups with modify permissions: Set()
17/06/09 12:43:45 INFO Utils: Successfully started service 'sparkDriver' on port 51695.
17/06/09 12:43:45 INFO SparkEnv: Registering MapOutputTracker
17/06/09 12:43:45 INFO SparkEnv: Registering BlockManagerMaster
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/09 12:43:45 INFO DiskBlockManager: Created local directory at C:\Users\EE208961\AppData\Local\Temp\blockmgr-7f3c43f0-d1f5-43fb-b185-a5bf78de4496
17/06/09 12:43:45 INFO MemoryStore: MemoryStore started with capacity 93.3 MB
17/06/09 12:43:45 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/09 12:43:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/09 12:43:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.22.70:4040
17/06/09 12:43:46 INFO Executor: Starting executor ID driver on host localhost
17/06/09 12:43:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51706.
17/06/09 12:43:46 INFO NettyBlockTransferService: Server created on 10.1.22.70:51706
17/06/09 12:43:46 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/09 12:43:46 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.22.70:51706 with 93.3 MB RAM, BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 WARN KafkaUtils: overriding enable.auto.commit to false for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding auto.offset.reset to none for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding executor group.id to spark-executor-sadad
17/06/09 12:43:46 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream@5cda65c2
17/06/09 12:43:47 INFO ForEachDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO ForEachDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO ForEachDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@1b9c49ce
17/06/09 12:43:47 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [ec2-54-69-23-196.us-west-2.compute.amazonaws.com:9092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = sadad
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/06/09 12:43:47 INFO AppInfoParser: Kafka version : 0.10.2.1
17/06/09 12:43:47 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5
The code just waits at this point; it neither throws an error nor streams anything.
Another thing I observed: if I don't include the "org.apache.kafka" % "kafka_2.11" % "0.10.2.1" dependency, I get
"Exception in thread "main" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 291941, only 32 bytes available",
so I added the kafka_2.11 (0.10.2.1) dependency.
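One thing worth checking, though it is an assumption rather than something the logs prove: with auto.offset.reset = latest and a group.id that is freshly randomized on every run, the direct stream only ever sees records produced after the job starts, so a topic that already holds data but receives no new traffic will look exactly like this: the job sits quietly with nothing printed and no errors. (Incidentally, the ConsumerConfig dump above shows group.id = sadad rather than the randomized java-test-consumer-... id, which suggests the pasted logs come from a slightly different run of the code.) A quick test is to push messages while the job is running, e.g. with the console producer that ships with Kafka 0.10:
bin/kafka-console-producer.sh --broker-list ec2-xx-yy-zz-abc.us-west-2.compute.amazonaws.com:9092 --topic test
Messages typed there carry no key, so the job would print null for each record, but that still confirms data is flowing; if it does, only the offset-reset behaviour was hiding the existing data, and switching auto.offset.reset to "earliest" would replay it.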

Resources