I am new to SCDF and am trying to get started with a RabbitMQ transport layer and SCDF version 1.2.2. I have set up RabbitMQ in a separate VM, and the SCDF local server and SCDF shell jars are in another VM. Can someone suggest how I can specify the server details of my RabbitMQ (which is on a different host in the same network) for SCDF to use as the transport?
For reasons outside my control, I need to use the MQ set up on a different machine. Please advise.
SCDF itself doesn't require RabbitMQ; it sounds like you are trying to use RabbitMQ as the binder for the Spring Cloud Stream applications that are orchestrated via SCDF.
You would need to configure the standard RabbitMQ connection properties that the binder uses (spring.rabbitmq.host, spring.rabbitmq.port, spring.rabbitmq.username, spring.rabbitmq.password).
The SCDF reference documentation describes how to pass such properties to the apps it deploys, either globally or per stream.
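For example, here is a minimal sketch (hostname, port, and credentials are placeholders, and it assumes your SCDF version supports common application properties via spring.cloud.dataflow.applicationProperties.stream.*) that passes the connection settings to every deployed stream app when starting the local server:

java -jar spring-cloud-dataflow-server-local-1.2.2.RELEASE.jar \
  --spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.host=rabbit-vm.example.com \
  --spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.port=5672 \
  --spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.username=guest \
  --spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.password=guest

Alternatively, the same spring.rabbitmq.* values can be supplied per stream at deployment time as deployment properties (e.g. app.*.spring.rabbitmq.host=...).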
Related
I have a Spring Boot (2.3.3) service using spring-kafka to currently access a dedicated Kafka/Zookeeper configuration. I have been using the application.properties setting spring.kafka.bootstrap-servers=localhost:9092 to access my dev/test Apache Kafka service.
However, in production we have a cluster of Kafka brokers (spread across many servers) configured in Zookeeper, and I have been asked to modify my service to query Zookeeper for the list of brokers and use that list instead of the bootstrap-servers configuration. The reason: our DevOps folks have been known to reconfigure servers/nodes and Kafka brokers.
Basically, I have been asked to make my service agnostic to where the Apache Kafka brokers are running. All my service needs to know is how to get the list of brokers (bootstrap server info including host and port) from Zookeeper.
Is there a way in spring-boot and spring-kafka to retrieve from Zookeeper the broker list and use that broker (aka bootstrap server) list in my service?
Spring delegates to the kafka-clients library for all connections; the kafka-clients stopped connecting to Zookeeper a long time ago and now talk only to the brokers themselves.
There is no built-in support in Spring for querying Zookeeper to determine the broker list.
Furthermore, in a future Kafka version, Zookeeper is going away altogether; see KIP-500.
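If you really must derive the list from Zookeeper yourself, a rough sketch is below. This is not something Spring or spring-kafka provides: it reads the /brokers/ids znodes directly (a Kafka internal that also goes away with KIP-500), so treat the znode layout and JSON fields as assumptions to verify against your broker version.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.zookeeper.ZooKeeper;
import java.util.ArrayList;
import java.util.List;

public class ZkBrokerLookup {

    // Reads the registered broker ids from Zookeeper and returns a
    // "host1:port1,host2:port2" string usable as spring.kafka.bootstrap-servers.
    public static String bootstrapServers(String zkConnect) throws Exception {
        ZooKeeper zk = new ZooKeeper(zkConnect, 10_000, event -> { });
        try {
            ObjectMapper mapper = new ObjectMapper();
            List<String> endpoints = new ArrayList<>();
            for (String id : zk.getChildren("/brokers/ids", false)) {
                // Each broker znode holds JSON such as {"host":"kafka1","port":9092,...}
                JsonNode broker = mapper.readTree(zk.getData("/brokers/ids/" + id, false, null));
                endpoints.add(broker.get("host").asText() + ":" + broker.get("port").asInt());
            }
            return String.join(",", endpoints);
        } finally {
            zk.close();
        }
    }
}

The resulting string could then be fed into spring.kafka.bootstrap-servers (for example from an EnvironmentPostProcessor), but given KIP-500 a stable DNS alias for the brokers is usually the simpler long-term answer.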
I'm trying to find examples of Kafka Connect with Spring Boot. It looks like there is no Spring Boot integration for Kafka Connect. Can someone point me in the right direction to be able to listen to changes on a MySQL DB?
Kafka Connect doesn't really need Spring Boot because there is nothing for you to code for it, and it works best when run in distributed mode, as a cluster, not embedded within other (single-instance) applications. I suppose if you did want to do that, you could copy the relevant portions of the source code, but that of course isn't using Spring Boot, and you'd have to wire it all up yourself.
The framework itself consists of a few core Java dependencies that have already been written (Debezium or the Confluent JDBC connector, for your MySQL example) and two config files: one for the Kafka Connect worker to know the bootstrap servers, serializers, etc., and another for the actual MySQL connector. So, if you want to use Kafka Connect, run it by itself and just write the consumer in the Spring app. A sketch of both config files follows.
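Roughly like this (file names, hosts, and credentials are placeholders, and some Debezium property names have been renamed across major versions, so check the docs for the version you run):

# worker config, e.g. connect-standalone.properties
bootstrap.servers=broker1:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/opt/kafka/connectors

# connector config, e.g. mysql-source.properties (Debezium MySQL)
name=mysql-source
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql.example.com
database.port=3306
database.user=debezium
database.password=secret
database.server.id=184054
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=broker1:9092
database.history.kafka.topic=schema-changes.inventory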
The alternatives to Kafka Connect itself would be to use Apache Camel within a Spring application (Spring Integration) or Spring Cloud Data Flow, and interface with those Kafka "components" (which aren't using the Connect API, AFAIK).
Another option, specific to listening to MySQL, is to use the Debezium Engine within your own code.
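A rough sketch of that embedded approach, following the Debezium Engine documentation (connection details are placeholders, and property names differ between Debezium versions):

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MysqlChangeListener {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "mysql-engine");
        props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // Offsets and schema history kept in local files for this single-instance sketch.
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        props.setProperty("database.history", "io.debezium.relational.history.FileDatabaseHistory");
        props.setProperty("database.history.file.filename", "/tmp/dbhistory.dat");
        props.setProperty("database.hostname", "mysql.example.com");
        props.setProperty("database.port", "3306");
        props.setProperty("database.user", "debezium");
        props.setProperty("database.password", "secret");
        props.setProperty("database.server.id", "184054");
        props.setProperty("database.server.name", "dbserver1");

        // The engine runs on its own thread and hands every change event to the callback.
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}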
I have integrated JMS using ActiveMQ in one of my Mule applications. I want to deploy it to CloudHub.
Could you please help me with the following queries:
To deploy the application with JMS configured to use ActiveMQ, does any groundwork need to be done before deployment (for example, does ActiveMQ have to be installed and configured for my CloudHub account)?
For the time being, I have configured the application to use an ActiveMQ instance that is already installed on an on-premise server, and it is being used from the CloudHub-deployed application. Is using an externally installed ActiveMQ like this a proper or standard approach?
I would appreciate a quick answer to the above queries.
Thank you,
Best Regards,
Krishna.
Since you have already installed the MQ service on your server side, you can use those credentials to configure your Mule MQ connector through the Mule properties file, the same way you would with a local runtime.
e.g.
mq.host=
mq.port=
mq.vhost=
mq.username=
mq.password=
CloudHub will connect to your on-premise MQ service. Your approach is correct, and no MQ-specific groundwork is required.
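For example, a Mule 3.x-style connector definition might reference those properties roughly as follows (a sketch only; attribute names can differ between connector and runtime versions, so check your connector's reference):

<jms:activemq-connector name="ActiveMQ_Connector"
    brokerURL="tcp://${mq.host}:${mq.port}"
    username="${mq.username}"
    password="${mq.password}"
    specification="1.1" />

Also make sure the on-premise broker is reachable from CloudHub (a publicly resolvable endpoint or a VPC/VPN connection), since the CloudHub workers run outside your network.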
I can monitor individual Spring Integration applications via VisualVM by changing the command-line parameters when starting the JVM (-Dcom.sun.....).
My application has components in multiple JVMs, each of which I can name.
I would like my operational console to connect to one JMX service per server via one port. Then, as I add JVMs (services), they should be discoverable by name to the operational console (let's assume it's VisualVM).
Any help is greatly appreciated
Take a look at Jolokia; I believe a single client can connect to multiple agents (you install a Jolokia agent on each JVM).
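For example, attaching the Jolokia JVM agent at startup exposes each JVM's MBeans over HTTP (the jar name and options below follow the Jolokia JVM agent docs; adjust the version, port, and application jar for your setup):

java -javaagent:jolokia-jvm-1.6.2-agent.jar=port=8778,host=0.0.0.0 -jar my-service.jar

Your console can then poll each agent's HTTP endpoint (e.g. http://host:8778/jolokia/) rather than opening a separate JMX/RMI connection per JVM.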
I am working on Spring XD and GemFire XD. I want to understand how Spring XD's distributed environment works. I know Spring XD uses either Redis or RabbitMQ as the transport.
I am clear on that part: I have installed Spring XD and RabbitMQ on one machine, and I changed the redis.properties file and added the hostnames.
Do I need to install Spring XD on all the machines? If so, after installing, how do I bring them up?
On the master machine, I will run ./xd-admin and ./xd-container.
How do you start up the nodes (Spring XD instances/workers) so that they can listen for instructions from xd-admin?
Please help me on this.
Thanks,
-Suyodhan
Redis is the only supported platform for analytics. For transport, you need either Redis or RabbitMQ.
Basically you just need to install Redis and RabbitMQ per their respective documentation. They can be on the same or different servers; ideally you would use their high-availability options, for example Redis Sentinel. You don't need RabbitMQ unless you want to change the default transport from Redis to Rabbit. Once you install Redis and RabbitMQ, bring them up and provide their host:port info (and any additional settings as applicable) in servers.yml of the XD install (on all nodes), then bring up the admin and the containers. Everything should work automatically, with Zookeeper used as the means to manage the distributed runtime.
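A rough servers.yml sketch (key names as in the XD 1.x servers.yml template, hosts and credentials are placeholders; verify against the file shipped with your install):

xd:
  transport: rabbit
spring:
  rabbitmq:
    addresses: rabbit-host:5672
    username: guest
    password: guest
  redis:
    host: redis-host
    port: 6379
zk:
  client:
    connect: zk-host:2181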
If you use Spring XD in distributed mode, I assume you have set up zookeeper as well. (If not check this http://docs.spring.io/spring-xd/docs/1.0.0.M7/reference/html/#_setting_up_zookeeper )
Admin and container instances register themselves with Zookeeper as they come up. The admin queries Zookeeper for available containers and assigns tasks such as deploying modules. Zookeeper is the trick behind distributed mode.
Hope this helps.
You install Spring XD once on one machine; Spring XD will then connect to your distributed, scaled-out HDFS environment.
You need to start the following (example commands are sketched after this list):
1. Redis, or RabbitMQ in your case
2. the HSQLDB server
3. the container
4. the admin
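For example (script paths follow the layout of the XD 1.x distribution as I recall it; adjust to your install):

redis-server                  # or start your RabbitMQ broker instead
hsqldb/bin/hsqldb-server      # the HSQLDB bundled with the XD distribution
xd/bin/xd-container
xd/bin/xd-admin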
When you start Spring XD, you first need to register the name node using the command:
hadoop config fs --namenode hdfs://serverip:8020
Then you can use any module defined in Spring XD (in a stream or batch job) by specifying its parameters directly, without specifying them in the servers.yml file.
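For example, from the XD shell (the stream name is illustrative):

stream create --name ticktock --definition "time | hdfs" --deploy

By default the hdfs sink writes under /xd/<streamname>, so the output for this stream ends up under /xd/ticktock.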
Moha.