I'm trying to provide a remote declarative service using Eclipse ECF.
In the service provider component definition, I have set the following properties:
service.exported.interfaces = *
service.exported.configs = ecf.generic.server
ecf.generic.server.id = ecftcp://localhost:3787/server
However, I can't figure out how to discover the service from the consumer side using these properties.
(I want to provide the service from one OSGi environment and discover it from another one.)
Discovery will run automatically if you have specified "service.exported.interfaces". For this to work you need to have the ECF discovery bundles installed in both of your targets. On top of this you have to choose a discovery provider, such as SLP, JmDNS or Zookeeper.
Please note that some discovery providers need additional properties to run correctly. For example, if the machine has more than one network interface, you need to tell the JmDNS provider which network it has to listen on.
Make sure that the ECF distribution bundle is started. This does not start automatically.
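For reference, this is roughly what setting those properties looks like when registering the service programmatically instead of through a DS component definition; HelloService and HelloServiceImpl are hypothetical placeholders for your own service.

    import java.util.Hashtable;
    import org.osgi.framework.BundleContext;

    public class HelloServiceRegistration {
        // HelloService / HelloServiceImpl stand in for your actual service types.
        static void register(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<>();
            // Same properties as in the question above.
            props.put("service.exported.interfaces", "*");
            props.put("service.exported.configs", "ecf.generic.server");
            props.put("ecf.generic.server.id", "ecftcp://localhost:3787/server");
            context.registerService(HelloService.class, new HelloServiceImpl(), props);
        }
    }

Once a discovery provider is installed and running on the consumer side, the imported service should show up in the local service registry and can be looked up or injected like any other OSGi service.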
Related
I am trying to introduce Pact in our company. However, the consumer calls APIs through the providers' SDKs, and the host/port is determined dynamically by Kubernetes. I am new to all this backend technology, so I am trying to understand how to deal with this, since it will be impossible to put the host/port into pom.xml if it is dynamic.
It depends on whether you're talking about the port of the mock service in the consumer tests, or the port of the provider in the verification step.
In the consumer tests, is it possible to provide a test implementation of the part of the SDK that looks up the port? Perhaps you could contact the provider team to see if they could supply one that would allow you to set a known port.
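As a rough sketch of that idea, assuming the SDK's host/port lookup can be hidden behind an interface (EndpointResolver and FixedEndpointResolver below are hypothetical names, not part of Pact or any real SDK):

    // Hypothetical abstraction over the SDK's host/port lookup.
    interface EndpointResolver {
        String baseUrl();
    }

    // Test implementation that pins the SDK to the Pact mock server's known host/port.
    class FixedEndpointResolver implements EndpointResolver {
        private final String baseUrl;

        FixedEndpointResolver(String host, int port) {
            this.baseUrl = "http://" + host + ":" + port;
        }

        public String baseUrl() {
            return baseUrl;
        }
    }

The consumer test would then construct the SDK client with a FixedEndpointResolver pointing at the mock server, while production code keeps using the Kubernetes-based lookup.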
Regarding the provider, you would typically run the verification step against a locally running provider in the CI build, not against one deployed in a live environment, so a known port can be used.
I want to make an application like JConsole. Is it possible? If yes, what changes need to be done at the JVM level? I am planning to use Spring Boot. As per my knowledge, JMX is enabled by default. Do I need to configure anything extra in my Spring Boot app in order to access the MBeans which are exposed by default?
Here I'm not trying to expose any MBeans; instead I'm trying to access those beans which are already exposed by the JVM. How do I achieve it?
JConsole is a JMX-compliant monitoring and management application. The architecture is quite simple: it's a client-server architecture, where the client is the remote application (for example JConsole, or the one that you want to build) and the server is the JMX agent. In your case, you want to build your own client, which is possible.
I want to make an application like JConsole. Is it possible?
Yes, it is possible.
If yes, what changes need to be done at the JVM level?
What do you mean by changes at the JVM level? You are simply creating a client application that connects to the server (the JMX agent) using a certain protocol. Remote Method Invocation (RMI) is the protocol JConsole uses to connect to the JMX agent. If you want to use RMI for communication, you don't have to do anything on the server side. But if you want to use some other protocol, you can define your own protocol adapter.
As per my knowledge, JMX is enabled by default.
As of Java SE 6, it is. But you can only monitor it locally. For connections from a remote machine, you need to define an RMI port on which the agent listens for incoming connections.
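For example, the remote agent is typically enabled with the standard com.sun.management system properties. The port below is only an illustration, authentication and SSL are disabled purely to keep the example short, and my-app.jar stands in for your application:

    java -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -jar my-app.jar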
Here I'm not trying to expose any MBeans; instead I'm trying to access those beans which are already exposed by the JVM. How do I achieve it?
Please check out the example from this link - Mimicking Out-of-the-Box Management Using the JMX Remote API. It shows you how to create a simple client application that connects to a remote JMX agent and accesses its MBeans. This should point you in the right direction.
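To make that concrete, here is a minimal client sketch using the standard javax.management.remote API; the host, port and attribute shown are placeholders for your own setup:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SimpleJmxClient {
        public static void main(String[] args) throws Exception {
            // Connect to a remote JMX agent (host/port are placeholders).
            JMXServiceURL url =
                    new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // List all MBeans exposed by the JVM, e.g. java.lang:type=Memory.
                for (ObjectName name : connection.queryNames(null, null)) {
                    System.out.println(name);
                }
                // Read an attribute of a standard platform MBean.
                Object heap = connection.getAttribute(
                        new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
                System.out.println("HeapMemoryUsage = " + heap);
            }
        }
    }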
In order to make an OSGi service remotely accessible, I'd like to use an ActiveMQ JMS broker as the distribution provider inside the ECF framework. Which steps should I take?
I'll be answering my own question in order to document it.
Get a minimal working ECF remote service workspace, with ecf.generic.server as the distribution and one of the discovery providers (Zookeeper, for example). You can use the http://wiki.eclipse.org/EIG:Getting_Started_with_OSGi_Remote_Services tutorial.
Install an ActiveMQ broker with default configuration.
Download the JMS/ActiveMQ ECF providers from https://github.com/ECF/JMS. Add the org.eclipse.ecf.provider.jms and org.eclipse.ecf.provider.jms.activemq projects to your workspace, build them, and add them to your run configuration or target platform.
For the provider service properties, make the following changes (a combined sketch of the resulting properties is shown after this list):
a. Change the "service.exported.configs" property to "ecf.jms.activemq.tcp.client".
b. Add an "ecf.endpoint.connecttarget.id" property with the value "tcp://[ACTIVEMQ_IP]:61616/rs_topic", replacing [ACTIVEMQ_IP] with the broker IP. You can also change the topic name to something related to your service.
Now we also need an ActiveMQ JMS server container. Add the following code on the provider side. Use the Activator, or fire up a new component with DS. You can also get an IContainerFactory object from the service registry.
import org.eclipse.ecf.core.ContainerFactory;
import org.eclipse.ecf.core.IContainerFactory;
// Create the server container that connects to the ActiveMQ broker topic.
IContainerFactory containerFactory = ContainerFactory.getDefault();
containerFactory.createContainer("ecf.jms.activemq.tcp.manager",
        new Object[] { "tcp://[ACTIVEMQ_IP]:61616/rs_topic" });
I can monitor individual Spring Integration applications via VisualVM by changing the command-line parameters when starting the JVM (-Dcom.sun.....).
My application has components in multiple JVMs, each of which I can name.
I would like my operational console to connect, per server, to one JMX service via one port. Then, as I add JVMs (services), they are discoverable by the operational console (let's assume it's VisualVM) by name.
Any help is greatly appreciated
Take a look at Jolokia. I believe a single client can connect to multiple agents (you install a Jolokia agent on each JVM).
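For illustration, once each JVM runs a Jolokia agent, one console process can poll them all over plain HTTP. Below is a minimal sketch using java.net.http (Java 11+); the host names and the default Jolokia agent port 8778 are assumptions for the example:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class JolokiaPoller {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // One Jolokia agent per JVM; the hosts/ports here are placeholders.
            List<String> agents = List.of("http://service-a:8778", "http://service-b:8778");
            for (String agent : agents) {
                // Read heap usage via Jolokia's REST-style /read endpoint.
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(agent + "/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(agent + " -> " + response.body());
            }
        }
    }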
If:
I have a bundle I wish to run on n OSGi containers exporting some service;
I am using DS to register the modified method for when configuration changes, so I can update the service via ConfigurationAdmin, and to export the interfaces remotely as per RFC119;
I am using Discovery to call those services from other bundles on other boxes,
is it possible to have a central configuration for this service using ConfigurationAdmin, so that I can publish a configuration change via Configuration Admin and have it received by all running instances of the service?
It seems from everything that I have read that ConfigurationAdmin is not network aware, and is local to each OSGi container?
Thanks for your insight in advance :)
So your bundle runs on N containers, exports its service to that local container only, and it exports ManagedService using remote services to publish it to some "central" container that has ConfigurationAdmin running?
You are right that ConfigurationAdmin is not network aware, but if the bundle remotely publishes its ManagedService to the container running Configuration Admin, it should work. The only caveat is that each ManagedService must have a unique service PID, so you cannot simply publish the same bundle in N containers unless you ensure that each instance ends up using a unique PID.
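A rough sketch of such a registration; the PID prefix com.example.myservice is hypothetical, and appending the framework UUID is just one way of keeping the PID unique per container:

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Constants;
    import org.osgi.service.cm.ManagedService;

    public class MyManagedService implements ManagedService {
        @Override
        public void updated(Dictionary<String, ?> properties) {
            // React to configuration pushed from the central ConfigurationAdmin.
        }

        static void register(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<>();
            // Make the PID unique per container, e.g. by appending the framework UUID.
            String uuid = context.getProperty(Constants.FRAMEWORK_UUID);
            props.put(Constants.SERVICE_PID, "com.example.myservice." + uuid);
            // Export the ManagedService remotely so the central container can see it.
            props.put("service.exported.interfaces", "*");
            context.registerService(ManagedService.class, new MyManagedService(), props);
        }
    }

Each container then publishes its own distinct PID, which is the uniqueness the paragraph above asks for.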
You should probably check out Karaf Cellar. It provides cluster support for OSGi applications and synchronizes configuration changes across nodes.