While playing with JBoss ESB, I have been looking through the quickstarts, mostly the publish-subscribe ones, since that is what we will be implementing. To run the subscribers, you just run specific ant targets. My question is: how are ESB subscriber clients typically installed and run? Would I just write a simple Java class with a main method (like all the examples show) and run that on the ESB server? (Well, not me, but the admin of the server.)
I'm more used to dealing with webapps, so anything other than copying an EAR or WAR into the deployment directory is throwing me for a loop.
The first ant target you run deploys a pub-sub .esb artifact, instead of a .war artifact, into the JBoss ESB deploy directory. Another target is then run, which invokes a Java app that places a message on the JMS queue/topic; that message is consumed by the deployed ESB action pipeline. The main method is most likely the piece of code which places the message onto the JMS topic. The artifacts deployed to the ESB are not run via a main method; ESB artifacts are started by the JBoss environment and are run by invocation of the action. In your example, the ESB action pipeline is a subscriber listening to the JMS queue, and your class with the main method is just a convenient way of placing a message onto the queue.
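For illustration, here is a minimal sketch of what such a message-sending main method typically looks like with the plain JMS API (the connection factory and topic JNDI names are placeholders, not the quickstart's actual entries):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SendSampleMessage {
    public static void main(String[] args) throws Exception {
        // Reads jndi.properties (initial context factory, provider URL) from the classpath
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination topic = (Destination) ctx.lookup("topic/quickstart_pubsub"); // placeholder JNDI name

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage("Hello ESB subscribers");
            producer.send(message); // consumed by the deployed .esb action pipeline
        } finally {
            connection.close();
        }
    }
}
```

Nothing in this class runs inside the ESB; it is just a client that publishes a message for the deployed artifact to pick up.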
Without knowing which quickstart you are running, and which version of JBoss ESB you are running against, this is about the most insight I can give you into this action pipeline.
An ESB is message-oriented middleware whose purpose is to act as an intermediary point of integration between two or more information systems. A common use for an ESB is to provide multiple integration interfaces to a system. Assuming you have some application which is a subscriber to an existing queue/topic, you could easily use an ESB to expose a web service to external clients and have the ESB act as a pass-through: transforming the SOAP or REST request into a JMS message, placing it on the queue/topic, and either waiting for a response or generating one, then transforming it back into a SOAP or REST response.
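To make the subscriber side concrete: in JBoss ESB the "subscriber" you deploy is usually an action class wired into the pipeline in jboss-esb.xml rather than a program with a main method. A rough sketch in the style of the quickstarts (the class and method names here are invented for illustration):

```java
import org.jboss.soa.esb.actions.AbstractActionLifecycle;
import org.jboss.soa.esb.helpers.ConfigTree;
import org.jboss.soa.esb.message.Message;

// Referenced from jboss-esb.xml, e.g.:
// <action name="print" class="example.MyListenerAction" process="displayMessage"/>
public class MyListenerAction extends AbstractActionLifecycle {

    protected ConfigTree config;

    // Constructor signature expected by the action pipeline
    public MyListenerAction(ConfigTree config) {
        this.config = config;
    }

    // Invoked by the ESB for each message taken off the JMS queue/topic
    public Message displayMessage(Message message) throws Exception {
        System.out.println("Received: " + message.getBody().get());
        return message; // hand the message to the next action in the pipeline
    }
}
```

The JBoss environment instantiates and invokes this class when the .esb archive is deployed; nobody ever calls it directly.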
I am trying to work through a solution where the workflow is like this:
User hits a microservice to upload images
That microservice de-duplicates the image and if it really is new, queues it up for processing
The processing chain lives in Spring Cloud Dataflow
The microservice already exists, and we are trying to extend it to do the fancy processing. My initial cut was to use the Http Source from the sample starter pack, since that would be something I didn't have to create. The problem is that the source doesn't register itself with the Spring discovery server, so there is no way to get an endpoint without making gross assumptions (like it lives on the dataflow server at port XYZ).
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
The major issue with Spring Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that microservice is always up.
The lifecycle of the server is decoupled from the apps it deploys; that was intentional.
I'm not following your thoughts on how Dataflow could connect the start of the queue, but from your description there are a few things you could do:
You would need to modify the app in order to have it registered with Eureka, but this is a very simple operation, no more than a few lines of code:
You can either start from a stream app perspective: https://start-scs.cfapps.io/ , select the http source and your binder, and then add the spring-cloud-netflix library as well as @EnableDiscoveryClient at the main Boot class.
Or start with http://start.spring.io: select Stream Rabbit or Stream Kafka, add the Web and Netflix libraries, then add the @EnableDiscoveryClient and @EnableBinding annotations and create a simple HTTP endpoint for your use case.
In either case it should be a small addition; a rough sketch of the second option follows below.
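Here is a hedged illustration of that second option (the class name, channel, and endpoint path are invented for the example; the Eureka and binder settings are assumed to come from application properties):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableDiscoveryClient          // registers this app with Eureka
@EnableBinding(Source.class)    // binds the "output" channel to Rabbit/Kafka
@RestController
public class ImageIngestSourceApplication {

    @Autowired
    private Source source;

    // Hypothetical endpoint: accepts an upload event and forwards it to the binder
    @PostMapping("/images")
    public void publish(@RequestBody byte[] imageBytes) {
        source.output().send(MessageBuilder.withPayload(imageBytes).build());
    }

    public static void main(String[] args) {
        SpringApplication.run(ImageIngestSourceApplication.class, args);
    }
}
```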
You can also open an issue at https://github.com/spring-cloud-stream-app-starters/http/issues suggesting that we add @EnableDiscoveryClient to the http source app; we can take that into consideration in our next iteration as well.
I'll try to clarify a few bits.
upload images -> if it really is new -> queues it up for processing
Upon a new upload event, you'd want to process the image. Here's a similar use-case, but more of a real-time streaming style solution. This is not what you're looking to do, but I thought it might be useful.
Porting the image processing code to a Spring Cloud Stream application is as simple as adding @EnableBinding(Processor.class). It is the same business logic: whether you're running it separately or orchestrating it via SCDF, it is still a standalone microservice. However, SCDF expects it to be either a Source, Processor, Sink, or Task application type. We will be opening this up to support arbitrary "functions" (lambdas) in a future release.
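For instance, a minimal processor wrapper around existing image-processing logic might look like this (class and method names are placeholders; the real business logic stays wherever it lives today):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(Processor.class) // wires the "input" and "output" channels to the binder
public class ImageProcessorApplication {

    // Called for every message arriving on the input channel
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public byte[] process(byte[] image) {
        return doFancyProcessing(image); // placeholder for the existing processing code
    }

    private byte[] doFancyProcessing(byte[] image) {
        return image; // no-op in this sketch
    }

    public static void main(String[] args) {
        SpringApplication.run(ImageProcessorApplication.class, args);
    }
}
```

Such an app can then be referenced from a stream definition that consumes a named destination, e.g. ":imageQueue > image-processor | log" (names invented for illustration).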
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
This is one of the standard solutions. You can directly consume new events (images) from a queue/topic and process them in the image-processor that we created in the previous step. The named-channel support in the DSL facilitates just that.
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
I'm not sure I understand this. If I had to guess, you're looking for a "named-channel" as the source, and that is supported.
The major issue with Spring Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that microservice is always up.
The moment you deploy a stream in SCDF, all the individual steps included in the DSL (i.e., the stream definition) are resolved and deployed as standalone apps in the target runtime (Cloud Foundry, Kubernetes, etc.). Once deployed, lifecycle management is left to the platform where the apps run. SCDF does not retain or track the app states.
We have a requirement for one standalone Java application that can push JMS messages to a JMS queue configured on a WebLogic, WebSphere, or JBoss application server.
Is there any generic JMS client library available that we can use in our application to push the messages to any or all of these servers?
As we understand it, there is a specific JMS client for each server (e.g. wljmsclient.jar is required for a WebLogic target server, since we would need weblogic.jndi.WLInitialContextFactory available as the initial context factory class; similarly for WebSphere and JBoss), and we would like to avoid having three different JMS client libraries (one per server) in the same application.
However, the catch here is that the destination server is not known at compile time. Only at runtime will it be known whether the given message is to be pushed to the WebLogic, WebSphere, or JBoss server, or to all of them. Hence, the deployed application needs to support all three servers at runtime.
Is there any alternative generic JMS client library?
You can develop your own client that supports the 3 servers.
Basically, for your standalone application you need:
The different JMS provider jars on the classpath
A configuration file, for example jms_config.properties, which stores the settings for each server (initial context factory, provider URL, etc.)
Then, from generic code, you can build the InitialContext, JMS queues, etc. depending on the target server, as sketched below.
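A rough sketch of that generic code, using only the standard javax.jms and javax.naming APIs (the property keys and JNDI names are examples, not a required convention):

```java
import java.io.InputStream;
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class GenericJmsSender {

    // "weblogic", "websphere" or "jboss" selects a section of jms_config.properties, e.g.:
    //   weblogic.initialContextFactory=weblogic.jndi.WLInitialContextFactory
    //   weblogic.providerUrl=t3://host:7001
    //   weblogic.connectionFactory=jms/MyConnectionFactory
    //   weblogic.queue=jms/MyQueue
    public void send(String target, String text) throws Exception {
        Properties config = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/jms_config.properties")) {
            config.load(in);
        }

        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, config.getProperty(target + ".initialContextFactory"));
        env.put(Context.PROVIDER_URL, config.getProperty(target + ".providerUrl"));

        InitialContext ctx = new InitialContext(env);
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup(config.getProperty(target + ".connectionFactory"));
        Queue queue = (Queue) ctx.lookup(config.getProperty(target + ".queue"));

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}
```

The provider-specific jars still have to be on the runtime classpath, but the calling code only depends on the JMS and JNDI interfaces.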
I am working on an existing project which uses JMS and Spring. I am new to JMS, and I need to test the application; my aim is to verify whether the classes used in the application are actually executed.
So can anybody suggest a way to test an application which uses JMS?
I have searched on Google, but every result starts with a sample application. I want to test my existing application, i.e. how my application connects with another module. Is there any tool like SoapUI, but for JMS, that I could point at my existing application, something from which I can execute my classes or listeners?
Edit 1
There is a scenario in my project where my module listens to a JMS queue and sends an SMS or email to the user, but I don't understand how my module is connected to the other module. Can anybody point me in the right direction, i.e. which services or APIs are used there?
Can anyone tell me what the possible endpoints on a WebSphere ESB are? For example a web service, a BPEL process, etc. What else can there be?
Thanks in advance!
Before I attempt to answer your question, I would like to bring something important to your attention:
The WebSphere ESB product has been sunset. Its features have been merged into IIB (earlier called WebSphere Message Broker).
If you are starting now, I would suggest that you explore IIB instead of WESB.
WESB can expose services via multiple means: web services (SOAP over HTTP and over JMS) and plain messaging (via either JMS or MQ).
In WESB terms these are called exports. Exports are the ways by which a service is exposed to the external world.
Note: BPEL processes cannot be hosted in a WESB environment.
My J2EE app is currently running on ServiceMix. Now I want to add JMS to my app. The application should be able to send/receive JMS messages to/from a queue that lives on MQSeries.
mq.hostname=10.3.6.19
mq.channel=CHANNEL
mq.queueManager=QManager
mq.port=1422
What I would like to do is:
1. Create a jndi.xml file and configure the JMS resources there.
2. Have my app initialize the context, look up the JNDI names, and create the connection, queue manager, queue, etc.
3. Develop send and receive methods.
My question is:
Can you tell me how to do the 1st and 2nd steps?
(The JNDI setup inside ServiceMix is different from Tomcat's and others'; ServiceMix uses a Spring-based JNDI provider: http://servicemix.apache.org/jndi-configuration.html)
I just ran into something similar with WebLogic. The following link uses Spring DM to integrate with WebSphere MQ. It also takes it to the next logical step and adds Camel to the mix.
http://lowry-techie.blogspot.com/2010/11/camel-integration-with-websphere-mq.html
Without using Spring DM, you may run into classloader issues when trying to load the InitialContextFactory from the WebSphere MQ jar (this is an issue I had with the WebLogic jar).
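If you end up bypassing JNDI altogether, another option is to build the connection factory programmatically. This is only a sketch, assuming the IBM MQ JMS client jars are visible to your bundle's classloader and reusing the property values from the question; the exact transport constant depends on the MQ client version:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqSender {

    public void send(String queueName, String text) throws Exception {
        MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
        factory.setHostName("10.3.6.19");      // mq.hostname
        factory.setPort(1422);                 // mq.port
        factory.setChannel("CHANNEL");         // mq.channel
        factory.setQueueManager("QManager");   // mq.queueManager
        factory.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client (TCP) mode rather than bindings

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}
```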