Configuring Debezium MySQL connector via env vars - apache-kafka-connect

The only way I know to configure a Debezium connector (MySQL in my case) is to send the configuration to a running Kafka Connect instance via its HTTP REST API.
My question is: is it possible to supply this configuration when starting the Connect instance, via a properties file or (ideally) via environment variables?

If you run a Connect worker in standalone mode, you can supply connector configuration on the command line (see details here):
bin/connect-standalone worker.properties connector1.properties [connector2.properties connector3.properties ...]
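Each connectorN.properties file holds one connector's configuration. As a rough sketch, a Debezium MySQL connector file might look like the following (host, credentials, and names are placeholders, and option names vary between Debezium versions, so check the Debezium docs for yours):
# placeholders throughout; adjust for your environment
name=inventory-mysql-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql.example.com
database.port=3306
database.user=debezium
database.password=dbz-secret
database.server.id=184054
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=schema-changes.inventory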
For distributed mode, you can only use the REST API, but you can automate the calls with tools like Ansible.
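For illustration, registering the same connector against a distributed worker is a single REST call, roughly like this (the worker URL and all settings are placeholders):
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "inventory-mysql-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-secret",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}'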

Related

How can I interact with a Corda node via RPC, using curl?

Hope all are safe and well! I asked this question on Slack, but it was suggested I ask here.
I have a Corda 4.3 compatibility zone setup using the bootstrapper, and I have setup my node.conf file user section as below:
rpcUsers = [
{
username=user1,
password=password1,
permissions=[ ALL ]
}
]
My RPC settings are:
rpcSettings {
    address="localhost:10201"
    adminAddress="localhost:10202"
}
And I can see that the port is open:
# nc -v localhost 10201
localhost (127.0.0.1:10201) open
^Cpunt!
My questions are:
Is it possible to connect to a Corda node and execute API commands using RPC?
By API commands I mean the same commands as if I were connected to the Corda shell. Is that the case?
Thanks,
Viv
SSH is disabled by default; you can enable it with the settings below in the node.conf file.
sshd {
    port = <portNumber>
}
Once enabled, you can connect to the node using SSH and execute all the commands that you could normally execute from the node's shell.
Use the below command to connect to the node:
ssh -p [portNumber] [host] -l [user]
For more details on the node shell, refer to the docs here: https://docs.corda.net/docs/corda-os/4.4/shell.html
You can create a Spring Boot webserver like in this example:
You create an RPC connection, which uses the RPC user credentials that you defined in your node.conf.
The RPC connection gets injected into the controller where you define your APIs.
The injected RPC connection exposes a proxy that you can use for many things, including starting flows and querying the vault. Have a look at the StandardController example to see various RPC interactions with the node. You can add your own APIs to the template CustomController.
The webserver is a simple Spring Boot application.
When you start the webserver with this Gradle task, it injects the RPC connection into the controller and exposes your APIs on the port that you supply in the application.properties file.
Now that the webserver is running, you can call your APIs using either curl or Postman.
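For example, assuming the webserver runs on port 8080 and one of your controllers maps a node-info endpoint (the path here is hypothetical; use whatever mapping you defined):
curl http://localhost:8080/node-info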

How to change the "kafka connect" component port?

On port 8083 I am running InfluxDB, for which I even get the GUI at http://localhost:8083.
Now to Kafka. Here I am following the setup at https://kafka.apache.org/quickstart
I start ZooKeeper, which lives in /opt/zookeeper-3.4.10, with:
bin/zkServer.sh start
With ZooKeeper started, I start Kafka from the /opt/kafka_2.11-1.1.0 folder:
bin/kafka-server-start.sh config/server.properties
Then I create a topic named "test" with a single partition and only one replica:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
The topic is created and can be checked with:
bin/kafka-topics.sh --list --zookeeper localhost:2181
Up to this point, everything is fine.
Now I need the Kafka Connect component to import/export data.
So I create some seed data: echo -e "foo\nbar" > test.txt
Then I run Kafka Connect with the connector configurations:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
After running the above command I get "Address already in use" and Kafka Connect stops.
I even changed rest.port=8084 in /opt/kafka_2.11-1.1.0/config/connect-distributed.properties so that it doesn't conflict with InfluxDB, which is already on 8083. Still I get the same "Address already in use" error and Kafka Connect stops.
Since you're using Kafka Connect in Standalone mode, you need to change the REST port in config/connect-standalone.properties:
rest.port=18083
To understand more about Standalone vs Distributed you can read the doc here.
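Before editing the file, it can also help to confirm which process is actually holding the port (on Linux, for example):
sudo lsof -i :8083
This should report the InfluxDB process in your case.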
By default, Kafka Connect standalone mode uses port 8083 for the REST API. If another process is already using that port, the worker will throw a BindException.
To change the port, open the config/connect-standalone.properties file in the Kafka root directory.
Add the following key-value property to change the port used for the REST API (Kafka should have included this in the properties file by default; as it stands, many developers go nuts trying to find the port mapping used in standalone mode). Put in a different port as you wish.
rest.port=11133
Kafka 3.0.0
Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. The REST API server can be configured using the listeners configuration option. This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https.
For example: listeners=http://localhost:8080,https://localhost:8443
By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol.
More details: https://kafka.apache.org/documentation/#connect_rest
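Once the worker is running, you can confirm which listener the REST server actually bound to; a GET on the root path returns version information (assuming the default of http://localhost:8083):
curl http://localhost:8083/
This returns something like {"version":"3.0.0","commit":"..."}.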
Change the port definition in config/server.properties (note that this changes the Kafka broker port itself, not the Connect REST port):
# The port the socket server listens on
port=9092

Specifying an http proxy with spring-boot

How do I specify an HTTP proxy when running a Spring Boot fat WAR as a Tomcat server?
I have tried the following, which does not work:
java -jar my-application.war --http.proxyHost=localhost --http.proxyPort=3128 --https.proxyHost=localhost --https.proxyPort=3128
and
java -jar my-application.war -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 -Dhttps.proxyHost=localhost -Dhttps.proxyPort=3128
I've found that I need -Dhttps.proxySet=true in order for the proxy config to actually be used.
Put the JVM options before -jar. This should work:
java -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 -Dhttps.proxyHost=localhost -Dhttps.proxyPort=3128 -jar my-application.war
Explanation
According to the java command-line documentation, the command's syntax is:
java [ options ] -jar file.jar [ arguments ]
The arguments are the args that will be received in your main(String[] args), so it is entirely your responsibility to use them somehow. If you forward them to Spring using SpringApplication.run(MyApplication.class, args);, then you need to consult the documentation for how Spring uses args in the run method.
The options, however, are not passed to your app. One of their uses is to set what Java calls system properties, via -Dproperty=value. According to Java Networking and Proxies, setting, e.g., the http.proxyHost property makes the JVM route all your HTTP requests through that host.
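If you can't easily edit the command line (for example, a wrapper script builds it for you), the JAVA_TOOL_OPTIONS environment variable is a standard JVM mechanism whose contents are appended to the options automatically; a sketch with the same proxy settings:
export JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 -Dhttps.proxyHost=localhost -Dhttps.proxyPort=3128"
java -jar my-application.war
The JVM prints a "Picked up JAVA_TOOL_OPTIONS: ..." line at startup, which confirms the settings were applied.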
You can configure all the remote DevTools properties (RemoteDevToolsProperties) in application.properties:
spring.devtools.remote.context-path= # Context path used to handle the remote connection.
spring.devtools.remote.proxy.host= # The host of the proxy to use to connect to the remote application.
spring.devtools.remote.proxy.port= # The port of the proxy to use to connect to the remote application.
spring.devtools.remote.restart.enabled=true # Whether to enable remote restart.
spring.devtools.remote.secret= # A shared secret required to establish a connection (required to enable remote support).
spring.devtools.remote.secret-header-name=X-AUTH-TOKEN # HTTP header used to transfer the shared secret.
For an authenticating proxy server, you also need to add:
-Dhttp.proxyUser=<username>
-Dhttp.proxyPassword=<password>
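Combined with the accepted answer, a full invocation through an authenticating proxy might look like this (credentials are placeholders):
java -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 -Dhttp.proxyUser=myuser -Dhttp.proxyPassword=secret -jar my-application.war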

Issue in connecting kafka from outside

I am using the Hortonworks Sandbox as my Kafka server and I'm trying to connect to Kafka from Eclipse with Java code.
I use this configuration for the producer to send messages:
metadata.broker.list=sandbox.hortonworks.com:45000
serializer.class=kafka.serializer.DefaultEncoder
zk.connect=sandbox.hortonworks.com:2181
request.required.acks=0
producer.type=sync
where sandbox.hortonworks.com is the sandbox host name I connect to.
In the Kafka server.properties I changed this configuration:
host.name=sandbox.hortonworks.com
advertised.host.name=<system IP on which my Eclipse is running>
advertised.port=45000
I also set up the port forwarding.
I am able to connect to the Kafka server from Eclipse, but while sending a message I get the exception:
"Failed to send messages after 3 tries."
First make sure you have configured a host-only network for your Hortonworks Sandbox VM as described here:
http://hortonworks.com/community/forums/topic/use-host-only-networking-for-the-virtual-machine/
After doing this your sandbox VM should get an IP (e.g. 192.168.56.101) and it should be reachable from your host via SSH like
$ ssh root@192.168.56.101
Then open Ambari at http://192.168.56.101:8080/ and change the Kafka configuration to
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://192.168.56.101:6667
The latter property must be added in the section "Custom kafka-broker" (See also http://hortonworks.com/community/forums/topic/ambari-alerts-how-to-change-kafka-port/).
Then start/restart Kafka via Ambari. You should now be able to access Kafka from outside the Hortonworks Sandbox VM. You can test this (from outside of the sandbox VM) using e.g. the Kafka console producer from the Kafka distribution like
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ bin/kafka-console-producer.sh --topic test --broker-list 192.168.56.101:6667
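To check the full round trip, you can also run a console consumer from outside the VM against the same advertised listener:
$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.56.101:6667 --topic test --from-beginning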
After almost one week of config tweaking, I finally got it to work. This is similar to asmaier's answer, but for those who, like me, use a cloud server (an Azure Sandbox-HDP) and want to publish/subscribe through a remote consumer/producer:
In Azure:
First, SSH into your Azure VM.
In the Ambari web UI at localhost:8080, add
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://127.0.0.1:6667
As root in the terminal, set up Docker port forwarding as described on the Hortonworks sandbox instructions page:
vi start_scripts/start_sandbox.sh
and add port 6667 to the list.
On your PC:
First, SSH into your Azure VM, tunneling port 6667 as well (see the tunnel sketch below).
Then run the console producer from cmd, or use your own Java/C# client:
kafka\bin\windows>kafka-console-producer --broker-list localhost:9092 --topic test
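The tunnel mentioned above might look something like the sketch below; exactly which local ports you forward depends on the broker list your client uses and on the advertised listener you configured (the host name is a placeholder):
ssh -L 9092:localhost:6667 -L 6667:localhost:6667 root@my-azure-sandbox.example.com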
The only thing that bothers me right now is that I can't find a way to make Kafka push messages directly to a real public IP on Azure. It seems data traffic through the broker can only operate inside the Docker network.

Connecting to a weblogic cluster via JPDA

I have a WebLogic cluster set up across two machines in a staging environment. I'd like to set up JPDA on at least one of the WebLogic instances so I can debug remotely. Generally I use wlst.sh and Jython scripts to start up the cluster via:
startNodeManager(...)
nmConnect(...)
nmStart(MyAdminServer)
connect(...) #connect to the admin server
start(...) #start the cluster
Where should I put the -Xdebug -Xrunjdwp:transport=... incantation so that I can attach to one of the WebLogic instances? I had no problem setting this up on a single instance through my domain's startWebLogic.sh, but it doesn't seem to work with the cluster.
From here: https://forums.oracle.com/forums/thread.jspa?threadID=2233816 it looks like I want to put the debug string in startManagedWebLogic.sh, but that doesn't seem to work with my Jython script either.
Figured it out, from http://docs.oracle.com/cd/E13222_01/wls/docs90/server_start/nodemgr.html#1081870:
"4. The Administration Server obtains the domain configuration from its config directory."
I checked the config directory under my domain and there was a suspicious file called config.xml. Within this file is the configuration for the WebLogic nodes, and this is where you want to put your JPDA config:
<server>
    <name>my-target-machine</name>
    ...
    <server-start>
        ...
        <arguments>(your other config stuff) -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n</arguments>
    </server-start>
</server>
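After restarting the managed server, you can check that the debug agent is listening by attaching a debugger, for example with the JDK's jdb (the host name is a placeholder):
$ jdb -connect com.sun.jdi.SocketAttach:hostname=my-target-machine,port=4000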
