Need to install compendium to install EventAdmin? - osgi

I have an application which receives high-volume events (some metrics data) from web applications. A non-OSGi application receives these events and is responsible for forwarding them to an OSGi bundle. I am trying to use EventAdmin for this communication. I looked at "EventAdmin is null in maven-osgi project", but it does not clearly answer a few questions.
I want to install the EventAdmin service (at the non-OSGi application level). I am using "org.osgi.service.event.EventAdmin". However, there is no separate jar for this; it is part of the compendium jar. Do I need to install the compendium jar?
If I don't do the above, the reference I get back from this call is always null:
ServiceReference ref = context.getServiceReference(EventAdmin.class.getName());
Any pointers are highly appreciated
Thanks
Masti

Event Admin is for communication inside one OSGi framework. You cannot use it outside of OSGi or for communication between OSGi containers. For your case I recommend using JMS for the communication; if that is too slow, RabbitMQ or HornetQ may work. There is also work in progress to define remote events for OSGi, but it is not yet available. One possible combination of Event Admin and JMS is to receive the JMS messages in one bundle and forward them over Event Admin, so your business-code bundles can abstract from JMS.
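Such a bridge bundle could look roughly like this (a sketch only; the topic name, the property keys, and the way the EventAdmin reference is injected are assumptions, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

/**
 * Bridge sketch: consumes JMS messages and re-publishes them on the OSGi
 * Event Admin so the other bundles never see JMS directly.
 */
class JmsToEventAdminBridge implements MessageListener {

    static final String TOPIC = "com/example/metrics"; // invented topic name

    private final EventAdmin eventAdmin; // injected, e.g. via Declarative Services

    JmsToEventAdminBridge(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    @Override
    public void onMessage(Message message) {
        try {
            MapMessage map = (MapMessage) message;
            Event event = new Event(TOPIC,
                    toProperties(map.getString("name"), map.getDouble("value")));
            eventAdmin.postEvent(event); // asynchronous delivery to subscribers
        } catch (JMSException e) {
            // log and drop, or route to a dead-letter destination
        }
    }

    // Pure helper: build the Event Admin property map from the metric fields.
    static Map<String, Object> toProperties(String name, double value) {
        Map<String, Object> props = new HashMap<>();
        props.put("metric.name", name);
        props.put("metric.value", value);
        return props;
    }
}
```

The non-OSGi side then only needs a JMS client; the EventAdmin lookup stays entirely inside the framework, which avoids the null `ServiceReference` problem.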

Related

How to implement microservice Saga in google cloud platform

I am investigating a solution for implementing the microservice Saga pattern on a platform hosted in K8s on GCP.
There are 2 options: Eventuate Tram and Axon. However, these frameworks do not seem to support message brokers managed by a cloud provider, such as Google Cloud Pub/Sub, and I do not want to deploy Kafka or RabbitMQ to K8s since GCP supports Pub/Sub already.
So is there any way to integrate either Eventuate or Axon with Google Cloud Pub/Sub?
Thanks
I am uncertain about Eventuate's angle on this, but Axon supports message brokers other than Axon Server through extensions. Throughout Axon's lifecycle (read: the last 10 years), several of these have been provided, but none is currently used for all the types of messages Axon Framework defines. So you wouldn't be able to use Kafka for sending commands in Axon, for example.
The reasoning for this? Commands, events and queries have different routing requirements, which should be reflected by using the right tool for the job.
To be a bit more specific on Axon's side, the following extensions can be used for distributing your messages:
AMQP -> for Events
Kafka -> for Events
JGroups -> for Commands
Spring Cloud Discovery -> for Commands
As you can tell, there currently is no Pub/Sub extension that would allow you to distribute your messages. On top of that, my gut tells me that if one were available, it would likely only be used for event messages, given Pub/Sub's intent as a message broker.
Luckily, this actually makes it rather straightforward to create such an extension yourself. Going into all the details of building it would be a little much, so I would recommend having a look at Axon's AMQP extension first. Some hints: for publication, you should add a component that handles Axon's events and publishes them to Pub/Sub. For handling events, you are required to build a StreamableMessageSource or SubscribableMessageSource. These interfaces are used by the TrackingEventProcessor and SubscribingEventProcessor respectively, which in turn are the components in charge of the technical aspects of handling events.
By the way, if you do build such an extension and need a hand, it would be best to ask on the AxonIQ forum, which you can find here.
One last, and rather important, note: such a connector would not be able to deal with all types of messages. If you require a more full-fledged Axon application running in a distributed fashion, I would highly recommend giving Axon Server a try before building your own solution from the ground up.
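To give a feel for the publication half described above, here is a rough sketch against Axon 4's EventBus and the Google Cloud Pub/Sub client. All names are illustrative, and the toString() serialization is deliberately naive; a real extension would use Axon's Serializer and carry the full metadata as message attributes:

```java
import java.util.List;
import java.util.function.Consumer;

import org.axonframework.eventhandling.EventBus;
import org.axonframework.eventhandling.EventMessage;

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;

/**
 * Sketch of the publication side of a hypothetical Axon -> Pub/Sub extension:
 * subscribe to the EventBus and forward every event to a Pub/Sub topic.
 */
class PubSubEventForwarder {

    static void bridge(EventBus eventBus, Publisher publisher) {
        Consumer<List<? extends EventMessage<?>>> forwarder = events -> {
            for (EventMessage<?> event : events) {
                PubsubMessage msg = PubsubMessage.newBuilder()
                        // Naive payload serialization, for illustration only.
                        .setData(ByteString.copyFromUtf8(String.valueOf(event.getPayload())))
                        .putAttributes("axon-message-id", event.getIdentifier())
                        .build();
                publisher.publish(msg); // returns an ApiFuture with the message id
            }
        };
        eventBus.subscribe(forwarder);
    }
}
```

The consuming half is the harder part: that is where the StreamableMessageSource or SubscribableMessageSource mentioned above comes in, wrapping a Pub/Sub subscriber so a TrackingEventProcessor or SubscribingEventProcessor can drive it.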

How to upload Spring Boot application using RabbitMQ messaging to AWS EC2?

I have a functioning application using Spring Boot, RabbitMQ and a MySQL DB locally. I'm curious how I can move this app to the AWS environment and get it working seamlessly.
The only part where I'm lost is how to get RabbitMQ into the cloud. Any suggestions?
I see three options for your needs:
Use the AmazonMQ managed service. This uses ActiveMQ under the hood and supports the AMQP protocol (so you can continue to use the RabbitMQ client). Here's an article on how to do it: https://aws.amazon.com/blogs/compute/migrating-from-rabbitmq-to-amazon-mq/.
Use a third-party managed service (such as CloudAMQP). This is similar to the first option, but you can choose a RabbitMQ provider if you wish.
Install RabbitMQ on an EC2 instance and manage it yourself. This is the most flexible option, but it requires more effort on your part and will probably cost more. I would recommend it only if you have special requirements that are not met by a hosted service.
In all cases, I would also recommend using a messaging abstraction such as Spring Messaging or Apache Camel to isolate your code from the messaging implementation. This reduces the boilerplate code you need for messaging and lets you focus on your application logic.
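The isolation idea can be sketched with a tiny port-and-adapter style interface. All names here are invented for the example; the production adapter would delegate to e.g. Spring AMQP's RabbitTemplate instead of recording to a list:

```java
import java.util.ArrayList;
import java.util.List;

// Business code depends only on this small interface, never on AMQP types.
interface OrderEvents {
    void orderPlaced(String orderId);
}

// Test/local implementation: records events instead of publishing them.
// A RabbitMQ-backed implementation would call
// rabbitTemplate.convertAndSend(exchange, routingKey, orderId) here instead.
class RecordingOrderEvents implements OrderEvents {
    final List<String> placed = new ArrayList<>();

    @Override
    public void orderPlaced(String orderId) {
        placed.add(orderId);
    }
}

class CheckoutService {
    private final OrderEvents events;

    CheckoutService(OrderEvents events) {
        this.events = events;
    }

    void checkout(String orderId) {
        // ...business logic...
        events.orderPlaced(orderId); // no messaging details leak in here
    }
}
```

Swapping RabbitMQ for AmazonMQ (or anything else) then touches only the adapter class, not the business code, and the recording implementation doubles as a test stub.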

Differences in bundle containers between Websphere and Karaf?

I'm evaluating options for my team's middleware. We really have a frankenstein'd setup: Apache ServiceMix (Karaf/ActiveMQ/CXF), WebSphere 8.5, ActiveMQ where we don't really need it, and none of our applications are coded to fail over to another node if the primary goes down. We've realized the issues with our setup and now want to improve.
We currently host bundles (not sure if they're all OSGi compliant) in a Karaf container; they are invoked via ActiveMQ after JMS messages are sent from WebSphere via Apache Camel.
My current idea is to kill off ActiveMQ, point all the Camel routes at HTTP (instead of JMS queues), and convert our data bundles/services to serve over HTTP through Apache CXF (replacing WebSphere for some things) rather than ActiveMQ queues/JMS. However, we have WebSphere licenses, and I know it supports bundles in some way; I'm just not familiar with how it does (is it the same in nature as Karaf?).
The main question is in the title, and I hope it's not too generic.
WebSphere 8.5 is a full OSGi container supporting Blueprint just as Karaf does.
You can, in theory, run your Camel bundles and whatnot just as well in WAS 8.5. However, Apache Karaf is far more aligned towards running ActiveMQ/CXF/Camel stuff than WebSphere will ever be. Installation in Karaf is a few commands, whereas installing and configuring the Camel features and basic Camel routes in WAS 8.5 was... well, a headache when I tried it last time. Others seem to have the same struggle.
I have rather good experience running Camel apps inside WebSphere Application Server, but that was by embedding Camel in a standard web app, not using the OSGi stuff. So embedded web apps are my recommendation for running Camel inside WebSphere.
On the "replace AMQ/JMS with HTTP" part: you are aware that you are replacing apples with pears, right? JMS has a lot of features HTTP does not have (and some overhead compared to HTTP). For the sake of completeness, WebSphere also has a JMS provider built in. So if you have a large, HA, secured WebSphere infrastructure, the WebSphere (SIBus) provider might be a good choice. Otherwise, ActiveMQ rocks :-)
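To make the apples-versus-pears point concrete, here are two hypothetical Camel routes side by side (endpoint URIs and bean names are invented):

```java
import org.apache.camel.builder.RouteBuilder;

/**
 * Sketch contrasting the two transports. With JMS the broker gives you
 * persistence, redelivery and consumer load balancing; with HTTP the route
 * is a plain synchronous request to a service that must be up right now.
 */
class TransportRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Broker-backed: the message survives consumer downtime and can
        // be redelivered on failure.
        from("jms:queue:orders")
            .to("bean:orderService?method=process");

        // HTTP-based: simpler infrastructure, but the caller owns retries
        // and back-pressure, and failures surface immediately.
        from("direct:ordersOverHttp")
            .to("http://inventory.example.com/orders");
    }
}
```

Keeping both behind Camel endpoints like this also means switching a route from JMS to HTTP (or back) is a one-line URI change rather than a rewrite.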

How are ESB Clients typically deployed

In playing with JBoss ESB, I have been looking through the quickstarts, mostly the publish-subscribe models since that is what we will be implementing. To run the subscribers, you just run specific ant targets. My question is, how are ESB subscriber clients typically installed and run? Would I just write a simple Java class with a main method (like all the examples show) and run that on the ESB server? (Well not me, but the admin of the server).
I'm more used to dealing with webapps, so anything other than copying an EAR or WAR over to the deployment directory is throwing me for a loop.
The first ant target you run deploys a pub-sub.esb artifact, instead of a .war artifact, into the JBoss ESB deploy directory. Another target is then run, which invokes a Java app that places a message on the JMS queue/topic; that message is consumed by the deployed ESB action pipeline. The main method is most likely the piece of code that places the message onto the JMS topic. The artifacts deployed to the ESB are not run via a main method: ESB artifacts are started by the JBoss environment and run by invocation of their actions. In your example, the ESB action pipeline is a subscriber listening to the JMS queue, and your class with the main method is just a convenient way of placing a message onto that queue.
Without knowing which quickstart you are running, and which version of JBoss ESB you are running against, this is about the most insight I can give you into this action pipeline.
An ESB is message-oriented middleware whose purpose is to act as an intermediary point of integration between two or more information systems. A common use for an ESB is to provide multiple integration interfaces to a system. Assuming you have some application which subscribes to an existing queue/topic, you could easily use an ESB to expose a web service to external clients and have the ESB act as a pass-through, transforming the SOAP or REST request into a JMS message, placing it on the queue/topic, and either waiting for a response or generating one and transforming it back into a SOAP or REST response.
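The "class with a main method" that the quickstarts run is typically nothing more than a plain JMS producer, along these lines (the JNDI names are invented; the quickstarts ship their own jndi.properties):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

/**
 * Minimal JMS 1.1-style client: look up the destination the deployed .esb
 * artifact listens on, and drop a single text message onto it.
 */
class SendTestMessage {

    static final String BODY = "hello esb";

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // reads jndi.properties
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination topic = (Destination) ctx.lookup("topic/quickstart_pubsub");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage(BODY));
        } finally {
            con.close();
        }
    }
}
```

This runs anywhere with network access to the broker, which is why the quickstarts launch it from ant rather than deploying it; only the .esb artifact lives on the server.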

How to configure Spring-DM OSGi service for new instance per call?

I'm starting to delve into using Spring DM and OSGi services in an RCP application. I've created a service which is used by another bundle in the RCP application. It does a lookup of the service via calls to getBundleContext().getServiceReference() using the explicit bundle names and service class names. I'm not using DI anywhere yet. The issue I'm running into is that the service that is returned in the requesting bundle is a singleton. At times I notice a threading issue since it is a "stateful" service. How do I configure the application to get back a new service instance with each call?
Here is my spring xml file contents which registers the service:
<bean id="myServBean" class="com.xyz.ClassImpl"/>
<osgi:service ref="myServBean" interface="com.xyz.Class"/>
OSGi services in general can be called concurrently by multiple clients. The only thing OSGi supports out of the box is the ServiceFactory, which lets you return a different instance to each invoking client bundle. There is no standard mechanism to create a new instance per method call; you would have to handle that in your service implementation yourself.
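A ServiceFactory registration could be sketched like this, using invented MyService/MyServiceImpl names in place of com.xyz.Class/ClassImpl. Note the granularity: you get one instance per client bundle, not per call:

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;

interface MyService {
    void doWork();
}

class MyServiceImpl implements MyService {
    @Override
    public void doWork() { /* stateful logic lives here */ }
}

class MyServiceFactory implements ServiceFactory<MyService> {

    @Override
    public MyService getService(Bundle bundle,
                                ServiceRegistration<MyService> registration) {
        // Called once per requesting bundle, not once per method call.
        return new MyServiceImpl();
    }

    @Override
    public void ungetService(Bundle bundle,
                             ServiceRegistration<MyService> registration,
                             MyService service) {
        // Release any per-bundle state here.
    }
}

class Registrar {
    static void register(BundleContext context) {
        // Registering the factory instead of an instance; the framework
        // calls getService lazily for each client bundle.
        context.registerService(MyService.class.getName(), new MyServiceFactory(), null);
    }
}
```

For true instance-per-call semantics you would make the service itself stateless (or a factory of your own), which also removes the threading issue described in the question.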
