I'm trying to connect a Spring Boot Kafka app to Kafka on Alibaba Cloud.
The cluster runs on the E-MapReduce service.
However, I can't connect from Boot, maybe due to some security credential that I need to provide?
I've already tried setting the Boot properties as follows:
spring.kafka.properties.security.protocol=SSL
This fails with the error: Connection to node -1 (/xx.xx.xx.xx:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
spring.kafka.properties.security.protocol=SASL_SSL
This throws: Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
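From what I can tell, that second error means the client is missing a JAAS entry. With spring-kafka I assume it can be supplied inline via properties rather than through a separate java.security.auth.login.config file, something like the lines below, but I don't know which SASL mechanism or credentials E-MapReduce actually expects (PLAIN and the username/password here are placeholders):

spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myUser" password="myPassword";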
Does anybody have experience connecting to Kafka on Alibaba Cloud?
I believe Kafka Connect could solve your problem of connecting a Spring Boot Kafka app to Kafka on Alibaba Cloud:
Step 1: Create Kafka clusters
Create a source Kafka cluster and a target Kafka cluster in E-MapReduce.
Step 2: Create a topic for storing the data to be migrated
Create a topic named connect in the source Kafka cluster (see the sketch after these steps for one programmatic way to do this).
Step 3: Create a Kafka Connect connector
Use Secure Shell (SSH) to log on to the header node of the source Kafka cluster.
Optional: Customize the Kafka Connect configuration.
Step 4: View the status of the Kafka Connect connector and task node
View the status of the Kafka Connect connector and task node and make sure that they are in a normal state.
Follow the remaining steps according to your job's needs.
Detailed instructions can be found in Use Kafka Connect to migrate data: https://www.alibabacloud.com/help/doc-detail/127685.htm
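If you would rather create the connect topic from Step 2 programmatically instead of from the console, a minimal sketch with the Kafka AdminClient could look like this (the bootstrap address, partition count, and replication factor are placeholders; adjust them to your source cluster):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateConnectTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: point this at the source cluster's brokers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "emr-header-1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic named "connect" as in Step 2; 1 partition and replication factor 1 are just examples.
            NewTopic topic = new NewTopic("connect", 1, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}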
Hope this helps.
Related
I am trying to send data from Kafka to Ignite using the Ignite Sink Connector. I have done a few experiments:
When I run Kafka, Ignite, and the connector locally on the same machine, I am able to send the data. In this case I provided the XML configuration file for Ignite in connector.properties, which includes the cache name and discovery property.
When I run Ignite on a remote node and the connector on the Kafka server's node, it is unable to push the data, even if I change the IP in the discovery property. In this case I run Ignite with the XML configuration on the other node, with that node's IP, via a terminal shell script.
When I run Kafka and Ignite on remote nodes but the connector on the Ignite side, it is able to pull from Kafka and push into the cache.
I am very new to Ignite; please help me out with these doubts.
I am using the same XML configuration file that ships with the Ignite setup, called example-cache.xml.
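For reference, an IgniteSinkConnector properties file for this kind of setup looks roughly like the following (the topic, cache name, and file path are placeholders, not my exact values):

name=my-ignite-sink
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
tasks.max=1
topics=myKafkaTopic
cacheName=myCache
cacheAllowOverwrite=true
igniteCfg=/path/to/example-cache.xml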
Why does it behave this way?
Ideally, on which side should the worker and connector run: Kafka or Ignite? If I want to run them on the Kafka server only, what changes do I need to make?
Have I misconfigured something in the XML configuration? If yes, what configuration should I make in my Ignite server XML file and in the XML file that I pass to the connector?
I have a Spring Boot (2.3.3) service that uses spring-kafka to access a dedicated Kafka/Zookeeper configuration. I have been using the application.properties setting spring.kafka.bootstrap-servers=localhost:9092 to access my dev/test Apache Kafka service.
However, in production, we have a cluster of Kafka brokers (on many servers) configured in Zookeeper, and I have been asked to modify my service to query Zookeeper for the list of brokers and use that list instead of the bootstrap-servers configuration. The reason: our DevOps folks have been known to reconfigure servers/nodes and Kafka brokers.
Basically, I have been asked to make my service agnostic to where the Apache Kafka brokers are running. All my service needs to know is how to get the list of brokers (bootstrap server info including host and port) from Zookeeper.
Is there a way in spring-boot and spring-kafka to retrieve from Zookeeper the broker list and use that broker (aka bootstrap server) list in my service?
Spring delegates to the kafka-clients for all connections; for a long time now, the kafka-clients no longer connect to Zookeeper, only to the brokers themselves.
There is no built-in support in Spring for querying the Zookeeper to determine the broker list.
Furthermore, in a future Kafka version, Zookeeper is going away altogether; see KIP-500.
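That said, if you really must derive the list yourself, brokers currently register under /brokers/ids in Zookeeper, and you could read those znodes directly at startup, for example with Apache Curator. A rough sketch, not a Spring feature, and fragile given the KIP-500 direction (the ensemble address is a placeholder):

import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class BrokerListFromZookeeper {
    public static void main(String[] args) throws Exception {
        // Placeholder: your Zookeeper ensemble address.
        try (CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zookeeper:2181", new ExponentialBackoffRetry(1000, 3))) {
            client.start();
            ObjectMapper mapper = new ObjectMapper();
            List<String> bootstrap = new ArrayList<>();
            // Each child of /brokers/ids holds a JSON blob that includes,
            // among other things, the broker's advertised host and port.
            for (String id : client.getChildren().forPath("/brokers/ids")) {
                JsonNode broker = mapper.readTree(client.getData().forPath("/brokers/ids/" + id));
                bootstrap.add(broker.get("host").asText() + ":" + broker.get("port").asText());
            }
            // The joined string is usable as a spring.kafka.bootstrap-servers value.
            System.out.println(String.join(",", bootstrap));
        }
    }
}

You would then have to feed that value into your consumer/producer factories yourself; again, this is a workaround, not something spring-kafka supports out of the box.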
I have connected Kafka with MSSQL using the JDBC connector. The connector connected successfully and its status is RUNNING. But when I curl http://ip:8083/topics, I get a 404 Not Found error: {"error_code":404,"message":"HTTP 404 Not Found"}.
What could be the reason for this?
This is the connector configuration.
{"name":"test-mssql-source","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing","incrementing.column.name":"id","topic.prefix":"test-mssql-","tasks.max":"1","poll.interval.ms":"100","name":"test-mssql-source","connection.url":"jdbc:sqlserver://ip;Database=TEST_KAFKA;user=user;password=root","value.converter":"org.apache.kafka.connect.json.JsonConverter"},"tasks":[{"connector":"test-mssql-source","task":0}],"type":"source"}
You're confusing Kafka Connect with the Confluent REST Proxy.
The Connect REST API has no /topics endpoint.
And if you did curl the REST Proxy, you would at least get an empty array when there are no topics.
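To illustrate the distinction, here is a small sketch (assuming a Java 11+ HttpClient and the Connect worker on port 8083) that hits an endpoint the Connect REST API does expose:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListConnectors {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET /connectors lists deployed connectors on a Connect worker;
        // GET /topics belongs to the Confluent REST Proxy, a separate service (default port 8082).
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://ip:8083/connectors")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. ["test-mssql-source"]
    }
}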
I want to use ZooKeeper in order to synchronize my distributed services via ZooKeeper ephemeral nodes.
The idea is the following - every node in the topology on the startup will create ZooKeeper session and ephemeral nodes. On the node restart or failure, these nodes will disappear.
I'm going to implement it using Spring Boot. Right now I'm unsure which project and Maven dependency to use in order to get ZooKeeper client autoconfiguration, create a ZooKeeper session on application startup, create ZooKeeper ephemeral nodes from that client, and use ZooKeeper transactions.
Right now I'm looking at Spring Cloud Zookeeper, but I'm not sure it is the right one for this purpose. Could you please point me to the right Spring Boot ZooKeeper project and show a small example of how to achieve what I have described above?
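To make the goal concrete, here is a rough sketch of the behaviour I'm after, written against plain Apache Curator (whether this is the idiomatic Spring Boot way is exactly my question; the connection string, path, and payload are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class EphemeralRegistration {
    public static void main(String[] args) throws Exception {
        // Placeholder: the real ensemble address.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        // The ephemeral node vanishes automatically when this session ends,
        // i.e. when the service stops or fails.
        client.create()
              .creatingParentsIfNeeded()
              .withMode(CreateMode.EPHEMERAL)
              .forPath("/topology/node-1", "host:port".getBytes());
        Thread.sleep(Long.MAX_VALUE); // keep the session (and therefore the node) alive
    }
}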
I have a streaming use case: develop a Spring Boot application that reads data from a Kafka topic and writes it to an HDFS path. I have two distinct clusters, one for Kafka and one for Hadoop.
The application worked fine when the Kafka cluster had no Kerberos authentication and only Hadoop was kerberized.
Issues started when both clusters were kerberized; I could only authenticate into one cluster at a time.
I did some analysis/googling but could not find much help.
My theory is that we cannot log in/authenticate into two kerberized clusters in the same JVM instance, because we need to set the REALM and KDC details in code, and those are JVM-specific rather than client-specific.
It might be that I did not use the proper APIs; I am very new to Spring Boot.
I know we can do this by setting up cross-realm trust between the clusters, but I am looking for an application-level solution if possible.
I have a few questions:
Is it possible to log in/authenticate to two separate kerberized clusters in the same JVM instance? If so, please help me; use of Spring Boot is preferred.
What would be the best solution to stream data from the Kafka cluster to the Hadoop cluster?
What would be the best solution to stream data from the Kafka cluster to the Hadoop cluster?
Kafka's Connect API is for streaming integration of sources and targets with Kafka, using just configuration files - no coding! The HDFS connector is what you want, and supports Kerberos authentication. It is open source and available standalone or as part of Confluent Platform.
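For illustration, here is a hedged sketch of an HDFS sink connector configuration with Kerberos enabled; the topic, URL, principals, and keytab path are placeholders, and you should check the property names against the connector documentation for your version:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=my-topic
hdfs.url=hdfs://namenode:8020
flush.size=1000
hdfs.authentication.kerberos=true
connect.hdfs.principal=connect-hdfs/host@MY.REALM
connect.hdfs.keytab=/path/to/connect-hdfs.keytab
hdfs.namenode.principal=nn/_HOST@MY.REALM

Because the Connect worker handles the Hadoop-side Kerberos login itself, your Spring Boot application would only need to authenticate to the Kafka cluster, which may sidestep the two-logins-in-one-JVM problem.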