Kafka from MSSQL: no topics created using JdbcSourceConnector

I have connected Kafka with MSSQL using the JDBC connector. The connector was created successfully and its status is RUNNING. But when I curl http://ip:8083/topics I get a 404 Not Found error: {"error_code":404,"message":"HTTP 404 Not Found"}.
What could be the reason for this?
This is the connector configuration:
{"name":"test-mssql-source","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector","mode":"incrementing","incrementing.column.name":"id","topic.prefix":"test-mssql-","tasks.max":"1","poll.interval.ms":"100","name":"test-mssql-source","connection.url":"jdbc:sqlserver://ip;Database=TEST_KAFKA;user=user;password=root","value.converter":"org.apache.kafka.connect.json.JsonConverter"},"tasks":[{"connector":"test-mssql-source","task":0}],"type":"source"}

You're confusing the Kafka Connect REST API with the Confluent REST Proxy.
The Connect API (port 8083 by default) has no /topics endpoint; it only exposes endpoints for managing connectors.
And if you had curled the REST Proxy instead, you would at least get an empty array when there are no topics.
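A quick way to see the difference, assuming the default ports (Connect on 8083, REST Proxy on 8082; adjust to your deployment):

# Kafka Connect REST API: manage and inspect connectors
curl http://ip:8083/connectors
curl http://ip:8083/connectors/test-mssql-source/status

# Confluent REST Proxy (a separate service): list topics
curl http://ip:8082/topics

If the connector status shows RUNNING tasks, the topics (prefixed with test-mssql-) can be listed with kafka-topics --list against the broker.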

Related

How to configure the Axon Server client with a remote server host? (without localhost)

I'm new to Axon Server. I'm trying to use Axon Server with Spring Boot.
I installed Axon Server on one of my cloud instances. When I run the Spring Boot application, it looks for a local Axon Server, but there is no local one in my case.
I couldn't find a way to configure the IP address in the property file. If you know how to configure the remote host of the Axon Server in a Spring Boot application, please help me do it.
The error looks like this:
Requesting connection details from localhost:8124
Connecting to AxonServer node [localhost:8124] failed: UNAVAILABLE: io exception
Failed to get connection to AxonServer. Scheduling a reconnect in 2000ms
Thanks.
To configure the location of Axon Server, add the following property to the application.properties file:
axon.axonserver.servers=<hostname/ip address>:<port>
If you are running Axon Server on the default port, you can omit the port number.
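For example, assuming a placeholder hostname axonserver.example.com and the default gRPC port 8124, the entry in application.properties would be:

axon.axonserver.servers=axonserver.example.com:8124

Multiple nodes can be listed comma-separated if you run an Axon Server cluster.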

Spring Boot connect to Alibaba E-MapReduce Kafka

I'm trying to connect a Spring Boot Kafka app to Kafka on Alibaba Cloud.
The cluster is on the E-MapReduce service.
However, I can't connect from Boot, maybe due to some security credential that I need to provide?
I've already tried setting the Boot properties as follows:
spring.kafka.properties.security.protocol=SSL
This gives the error: Connection to node -1 (/xx.xx.xx.xx:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
spring.kafka.properties.security.protocol=SASL_SSL
This throws: Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
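That second error points at a missing JAAS login configuration, which SASL_SSL requires. A minimal sketch of such a file, assuming SASL/PLAIN and placeholder credentials:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="your-username"
  password="your-password";
};

It can be passed to the JVM with -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf, or supplied inline via the spring.kafka.properties.sasl.jaas.config property together with spring.kafka.properties.sasl.mechanism=PLAIN.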
Does anybody have experience connecting to Kafka on Alibaba Cloud?
I believe Kafka Connect could solve your problem of connecting a Spring Boot Kafka app to Kafka on Alibaba Cloud:
Step 1: Create Kafka clusters
Create a source Kafka cluster and a target Kafka cluster in E-MapReduce.
Step 2: Create a topic for storing the data to be migrated
Create a topic named connect in the source Kafka cluster (see the sketch after these steps).
Step 3: Create a Kafka Connect connector
Use Secure Shell (SSH) to log on to the header node of the source Kafka cluster.
Optional: Customize the Kafka Connect configuration.
Step 4: View the status of the Kafka Connect connector and task node
View the status of the Kafka Connect connector and task node and make sure that they are in a normal state.
Follow the other steps as your job requires.
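A sketch of that topic-creation step, assuming the standard Kafka CLI is available on the cluster's header node (host name, partition count, and replication factor are placeholders to adjust for your cluster):

kafka-topics.sh --create --bootstrap-server emr-worker-1:9092 --topic connect --partitions 10 --replication-factor 2

Older Kafka versions use --zookeeper <zk-host:2181> instead of --bootstrap-server.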
Detailed instructions can be found in "Use Kafka Connect to migrate data": https://www.alibabacloud.com/help/doc-detail/127685.htm
Hope this helps.

JDBC Kafka Connect with DB2

I'm struggling to get Confluent's Kafka connector to connect to DB2.
I am running an Ubuntu instance inside Docker for testing purposes. The solution needs to be deployed to Kubernetes, so Docker it is.
I have installed the Confluent Platform using apt-get after adding their repos. All services are running: Kafka, ZooKeeper, Schema Registry, and Kafka REST.
I have created my Kafka Connect properties file as described in this article: https://www.progress.com/blogs/build-an-etl-pipeline-with-kafka-connect-via-jdbc-connectors
I assumed that this would work the same for DB2. The step I'm missing from the above tutorial is this one:
java -jar PROGRESS_DATADIRECT_JDBC_POSTGRESQL_ALL.jar
I tried to run it like this:
java -jar /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I get this error:
no main manifest attribute, in /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I proceeded anyway, but of course I get an error:
No suitable driver found for jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
This is my command to start the connector:
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /etc/kafka-connect-jdbc/db2.properties
This is my properties file:
name=test-db2-jdbc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=modified_time
topic.prefix=test_jdbc_
table.whitelist=data_log
I am sure I'm close. I just need to register the DB2 driver inside Java, or get Kafka Connect to pick it up and use it.
I have tried other values for connector.class, but if I change it to the driver class name, as I would in other Java apps, I get this error:
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Class com.ibm.db2.jcc.DB2Jcc does not implement Connector
Any help or suggestions will be appreciated.
I am the author of the tutorial that you mentioned. I just noticed this thread, and I see that you are using the IBM-supplied DB2 driver (db2jcc.jar) with the DataDirect DB2 connection string (jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db), which is why, as soon as you changed the connection string to match the IBM-supplied driver, you were able to connect properly.
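For reference, the IBM db2jcc driver expects a URL of the form below (a sketch reusing the question's placeholder values), rather than the DataDirect jdbc:datadirect:db2:// form:

connection.url=jdbc:db2://db2-server:50000/test_db:user=db2admin;password=pwd;

Note also that the "no main manifest attribute" error is expected: db2jcc.jar is a plain driver library with no main class, unlike the DataDirect jar, which is a self-extracting installer. The jar only needs to sit where Kafka Connect can load it (e.g. alongside kafka-connect-jdbc), and connector.class must remain io.confluent.connect.jdbc.JdbcSourceConnector; the driver class is never used as the connector class.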

Connecting to multiple AMQP brokers

I am trying to set up an HA AMQP client. There are currently 3 AMQPS brokers. Currently my client config is as below:
<property name="remoteURI" value="amqps://node1:9551?jms.username=XXXXXXXX&jms.password=XXXXXXXXX&transport.trustStoreLocation=etc/keystore.jks" />
Now, since I have 2 other AMQP brokers too, I'm trying to connect to them as well. First, is it possible? According to the documentation, for Python I can try something like:
connection = qpid.messaging.Connection.establish("node1", reconnect=True, reconnect_urls=["node1", "node2", "node3"])
But for a JMS-related connection, it states:
connectionfactory.qpidConnectionfactory = amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&failover='failover_exchange'
But I don't see any indication of how to connect to the other brokers.
Any idea how this can be achieved from the client side?
My assumption is that you are using the QpidJMS AMQP 1.0 client from the Apache Qpid project, since you've not given any other information to allow a better guess. That client handles failover configuration on the connection URI, something like:
failover://(amqp://host1:5672,amqp://host2:5672)?jms.username....
You can, of course, find these things out by reading the documentation.
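A fuller sketch adapted to the amqps setup in the question (host names, port, and credentials are the asker's placeholders; with QpidJMS, transport options such as the trust store belong on each broker URI inside the parentheses, while jms.* options go on the composite URI):

failover:(amqps://node1:9551?transport.trustStoreLocation=etc/keystore.jks,amqps://node2:9551?transport.trustStoreLocation=etc/keystore.jks,amqps://node3:9551?transport.trustStoreLocation=etc/keystore.jks)?jms.username=XXXXXXXX&jms.password=XXXXXXXXX

Exact option placement can vary between client versions, so check the QpidJMS configuration documentation for the version in use.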

Does camel-elasticsearch 2.11.x not work remotely?

I am using the Camel Elasticsearch component: http://camel.apache.org/elasticsearch.html
My assumption, based on the docs, is that the Elasticsearch server must be on the same network as the running Camel route in order to work. Is this correct?
To clarify, the only connection property available is 'clustername'. I assume the cluster is discovered by searching the network via multicast.
My code needs to connect to a remote service. Is this just not possible?
I am fairly new to Elasticsearch in general.
I had a similar problem with the autodiscovery of Elasticsearch. I had a Camel route that tried to index some exchanges, but the cluster was located in another subnet and thus not discovered.
With the Java API of ES it is possible to connect to a remote cluster with a TransportClient, specifying an IP address. I don't have access to the code at the moment, but the Java API section of the ES documentation provides clean example code. You could make such a connection from within a bean in the route, for example.
I also submitted a patch to Camel that adds an ip parameter to the route, which should then connect to the remote cluster with such a TransportClient. The documentation states that this should be available in Camel 2.12.
Hope this helps.
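A minimal sketch of that TransportClient approach, using the Elasticsearch 1.x-era Java API that was contemporary with Camel 2.11/2.12 (host name and cluster name are placeholders):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class RemoteEsClient {
    public static void main(String[] args) {
        // The cluster name must match the remote cluster's configured name.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster")
                .build();

        // 9300 is the default transport port (distinct from the 9200 HTTP port).
        TransportClient client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("es-host.example.com", 9300));

        // ... index or search through the client here, e.g. from a Camel bean ...

        client.close();
    }
}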
