I can monitor individual Spring Integration applications via VisualVM by changing the command-line parameters when starting the JVM (-Dcom.sun.....).
My application has components in multiple JVMs, each of which I can name.
I would like my operational console to connect to one JMX service per server via one port. Then, as I add JVMs (services), they would be discoverable by name in the operational console (let's assume it's VisualVM).
Any help is greatly appreciated
Take a look at Jolokia. I believe a single client can connect to multiple agents (you install a Jolokia agent on each JVM).
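For illustration only, here is a rough sketch of what the console side could look like, assuming the Jolokia Java client library (J4pClient) and a Jolokia JVM agent attached to each of your named JVMs; hosts, ports and the monitored attribute are placeholders:

import org.jolokia.client.J4pClient;
import org.jolokia.client.request.J4pReadRequest;
import org.jolokia.client.request.J4pReadResponse;

// Sketch: each application JVM is assumed to run a Jolokia agent
// (e.g. attached with -javaagent:jolokia-jvm.jar) exposing HTTP on port 8778.
public class MultiJvmMonitor {
    public static void main(String[] args) throws Exception {
        String[] agentUrls = {
                "http://app-host-1:8778/jolokia",
                "http://app-host-2:8778/jolokia"
        };
        for (String url : agentUrls) {
            J4pClient client = new J4pClient(url);
            // Read a standard platform MBean attribute over HTTP/JSON
            J4pReadRequest request =
                    new J4pReadRequest("java.lang:type=Memory", "HeapMemoryUsage");
            J4pReadResponse response = client.execute(request);
            System.out.println(url + " -> HeapMemoryUsage = " + response.getValue());
        }
    }
}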
I am new to SCDF and am trying to get started with a RabbitMQ transport layer and SCDF version 1.2.2. I have set up RabbitMQ in a separate VM and have the SCDF local server and SCDF shell jar in one VM. Can someone suggest how I can specify the server details of my RabbitMQ (which is on a different host in the same network) for SCDF to use as a transport?
For reasons outside my control, I need to use the MQ setup on a different machine. Please advise.
SCDF itself doesn't require RabbitMQ; I think you are trying to use RabbitMQ as the binder for the Spring Cloud Stream applications that are orchestrated via SCDF.
You would need to configure the properties mentioned here
You can find more information here on how to specify these properties in SCDF.
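As an illustration (host, port and credentials below are placeholders): the RabbitMQ binder inside the deployed stream apps picks up the standard spring.rabbitmq.host/port/username/password properties, and those same values map onto Spring AMQP's connection factory, so a quick stand-alone check that the remote broker is reachable could look roughly like this:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

// Connectivity check sketch; replace host, port and credentials with your own.
public class RabbitConnectivityCheck {
    public static void main(String[] args) {
        CachingConnectionFactory cf =
                new CachingConnectionFactory("rabbit-host.example.com", 5672);
        cf.setUsername("guest");
        cf.setPassword("guest");
        cf.createConnection().close(); // throws if the broker is unreachable
        cf.destroy();
        System.out.println("Connected to RabbitMQ");
    }
}

In SCDF itself you would typically set these values once on the Data Flow server (e.g. via spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.host and the related properties, as described in the linked docs) so that every deployed stream app inherits them.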
I want to make an application like JConsole. Is it possible? If yes, what changes need to be done at the JVM level? I am planning to use Spring Boot. To my knowledge, JMX is enabled by default. Do I need to configure anything extra in my Spring Boot app in order to access the MBeans that are exposed by default?
Here I'm not trying to expose any MBeans; instead, I'm trying to access the beans that are already exposed by the JVM. How do I achieve that?
JConsole is a JMX-compliant monitoring and management application. The architecture is quite simple: it's a client-server architecture, where the client is the remote application (for example, JConsole or the one you want to build) and the server is the JMX agent. In your case, you want to build your own client, which is possible.
I want to make an application like JConsole. Is it possible?
Yes, it is possible.
If yes, what changes need to be done at the JVM level?
What do you mean by changes at the JVM level? You are simply creating a client application that connects to the server (the JMX agent) using a certain protocol. Remote Method Invocation (RMI) is the protocol JConsole uses to connect to the JMX agent. If you want to use RMI for communication, you don't have to do anything on the server side. But if you want to use some other protocol, you can define your own protocol adaptor.
To my knowledge, JMX is enabled by default.
As of Java SE 6, it is. But out of the box you can only monitor locally. For connections from a remote machine, you need to define an RMI port (e.g. via the com.sun.management.jmxremote.port system property) so the agent starts listening for incoming connections.
Here I'm not trying to expose any MBeans; instead, I'm trying to access the beans that are already exposed by the JVM. How do I achieve that?
Please check out the example from this link - Mimicking Out-of-the-Box Management Using the JMX Remote API. It shows you how to create a simple client application that connects to a remote JMX agent and accesses its MBeans. This should point you in the right direction.
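To give a flavour of it, here is a minimal client along those lines using the standard JDK JMX remote API; host and port are placeholders, and the target JVM is assumed to have remote JMX enabled:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Assumes the target JVM was started with something like
// -Dcom.sun.management.jmxremote.port=9999 (and, for a local test,
// authentication/SSL disabled).
public class SimpleJmxClient {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // List every MBean the remote agent exposes
            Set<ObjectName> names = mbsc.queryNames(null, null);
            for (ObjectName name : names) {
                System.out.println(name);
            }

            // Read an attribute of a platform MBean, e.g. current heap usage
            Object heap = mbsc.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            System.out.println("HeapMemoryUsage = " + heap);
        } finally {
            connector.close();
        }
    }
}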
I want to expose some custom application metrics, such as how many records have been processed, to JMX in a MuleSoft flow.
My application is a Spring Boot application.
Any idea on how to achieve this?
Thanks & Regards,
Vikas Gite
Using the Default JMX Support Agent
You can configure several JMX agents simultaneously using the <jmx-default-config> element. When set, this element registers the following agents:
JMX agent
RMI registry agent (if necessary) on rmi://localhost:1099
Remote JMX access on service:jmx:rmi:///jndi/rmi://localhost:1099/server
(Optional) Log4J JMX agent, which exposes the configuration of the Log4J instance used by Mule for JMX management
JMX notification agent used to receive server notifications using JMX notifications
(Optional) MX4J adapter, which provides web-based JMX management, statistics, and configuration viewing of a Mule instance
You can also try the settings below, which can be configured directly in Mule's wrapper.conf (under $MULE_HOME/conf/wrapper.conf) to enable JMX connectivity, so that tools like JMC (Java Mission Control), VisualVM, and JConsole can pull JMX attribute values from the Mule server and the deployed applications:
wrapper.java.additional.<n>=-Dcom.sun.management.jmxremote
wrapper.java.additional.<n>=-Dcom.sun.management.jmxremote.port=9999
wrapper.java.additional.<n>=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.<n>=-Dcom.sun.management.jmxremote.local.only=false
wrapper.java.additional.<n>=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.<n>=-Djava.rmi.server.hostname=<localhost ip>
wrapper.java.additional.<n>=-Djava.rmi.activation.port=9998
Here, <n> should be replaced with the next incremental index of the wrapper.java.additional properties, and <localhost ip> will be the IP of the Mule runtime host/server.
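Since your application itself is a Spring Boot app, here is a minimal sketch of publishing a custom "records processed" metric as an MBean; the class name and JMX object name are just placeholders:

import java.util.concurrent.atomic.AtomicLong;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

// Spring Boot registers @ManagedResource beans with the platform MBeanServer
// when JMX is enabled (spring.jmx.enabled=true; check the default for your
// Boot version).
@Component
@ManagedResource(objectName = "com.example.metrics:type=RecordCounter")
public class RecordCounter {

    private final AtomicLong recordsProcessed = new AtomicLong();

    // Call this from your processing code each time a record is handled
    public void increment() {
        recordsProcessed.incrementAndGet();
    }

    @ManagedAttribute(description = "Number of records processed so far")
    public long getRecordsProcessed() {
        return recordsProcessed.get();
    }

    @ManagedOperation(description = "Reset the counter")
    public void reset() {
        recordsProcessed.set(0);
    }
}

With the wrapper.conf settings above in place, the attribute should then show up in JConsole/VisualVM under the object name you chose.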
I hope this is what you are looking for; if not, please post further questions or clarifications and I will help you out.
I want to set up a network of brokers because I need to serve a number of users at the same time. I discovered that I can either use embedded brokers and start every broker in Java code, or download the full Apache ActiveMQ distribution and run multiple instances.
For the moment, I don't have any specific reason to use embedded brokers, but on the other hand I don't have any reason against using them either. Could you please give a hint as to what the real disadvantages of using embedded brokers may be?
Thanks, Cheers
You probably want stand-alone broker(s).
Embedded brokers (in an application context) are usually used inside an application server to provide fast response/low latency to application code running in that server. The embedded broker then stores and forwards the messages to another broker or to interested clients. Other use cases include using an embedded broker in unit tests or on an embedded IoT-style computer.
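For reference, this is roughly what an embedded broker looks like in code, using ActiveMQ's BrokerService (broker name and connector URI are placeholders). The main trade-off versus a stand-alone installation is that the broker shares the application's JVM, heap and lifecycle:

import org.apache.activemq.broker.BrokerService;

// Minimal embedded-broker sketch: in-memory only, with one TCP connector so
// external clients can still connect.
public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded-broker");
        broker.setPersistent(false);                  // no persistent store
        broker.addConnector("tcp://localhost:61616"); // accept external clients
        broker.start();

        // ... run the application; the broker lives and dies with this JVM ...

        broker.stop();
    }
}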
I am working on Spring XD and GemFire XD. I want to understand how Spring XD's distributed environment works. I know Spring XD uses either Redis or RabbitMQ as the transport.
I am clear about this: I have installed Spring XD and RabbitMQ on one machine. I changed the redis.properties file and added the hostnames.
Do I need to install Spring XD on all the machines? If so, after installing, how do I bring them up?
On the master machine, I will run ./xd-admin and ./xd-container.
How do you start up the nodes (Spring XD instances/workers) so that they can listen for instructions from xd-admin?
Please help me on this.
Thanks,
-Suyodhan
Redis is used for analytics, as it is the only supported platform for that. For transport, you need either Redis or RabbitMQ.
Basically, you just need to install Redis and RabbitMQ per their respective documentation. They can be on the same or different servers; ideally you would use their high-availability options, for example Redis Sentinel. You don't need RabbitMQ unless you want to change the default transport from Redis to Rabbit. Once you install Redis and Rabbit, bring them up and provide their host:port info (and any additional settings as applicable) in the servers.yml of the XD install (on all nodes), then bring up the admin and containers. Everything should work automatically, using ZooKeeper as the means to manage the distributed runtime.
If you use Spring XD in distributed mode, I assume you have set up ZooKeeper as well. (If not, check this: http://docs.spring.io/spring-xd/docs/1.0.0.M7/reference/html/#_setting_up_zookeeper)
Admin and container instances register themselves with ZooKeeper as they come up. The admin queries ZooKeeper for available containers and assigns tasks like deploying modules. ZooKeeper is the trick behind distributed mode.
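Just to illustrate the mechanism, here is a small sketch that lists the containers registered in ZooKeeper using Apache Curator; the connect string is a placeholder, and /xd/containers is assumed to be the registration path used by your XD version:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Lists the container nodes that have registered themselves with ZooKeeper.
public class ListXdContainers {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        try {
            for (String container : client.getChildren().forPath("/xd/containers")) {
                System.out.println("Registered container: " + container);
            }
        } finally {
            client.close();
        }
    }
}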
Hope this helps.
You install Spring XD once, on one machine; Spring XD will be connected to your distributed, scaled-out HDFS environment.
You need to start the following:
1. Redis, or RabbitMQ in your case
2. the HSQLDB server
3. the container
4. the admin
When you start Spring XD, you first need to register the name node using the command:
hadoop config fs --namenode hdfs://serverip:8020
Then you can use any module defined in Spring XD (via a stream or a batch job) by specifying its parameters directly, without specifying them in the servers.yml file.
Moha.