What is the difference between a JMS client and a JMS broker?

What is the basic difference between a JMS client and a JMS broker?

A good explanation comes from the JMS 2 specification itself in section 2.2. Keep in mind that it uses the term provider instead of broker.
JMS Clients - These are the Java language programs that send and receive messages.
...
JMS Provider - This is a messaging system that implements JMS in addition to the other administrative and control functionality required of a full featured messaging product.
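To make the distinction concrete, here is a minimal sketch of a JMS client using the JMS 2.0 simplified API. It assumes a JMS 2.0-capable client library on the classpath (ActiveMQ Artemis's is used here for illustration); the broker URL and queue name are placeholders. The client is just this program; the broker (provider) is the separate server process it connects to.

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SimpleJmsClient {
    public static void main(String[] args) {
        // The client only talks to the JMS API; the broker runs as a
        // separate process, assumed here to listen on tcp://localhost:61616.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("example.queue"); // hypothetical queue name

            // Send a message to the broker...
            context.createProducer().send(queue, "hello, broker");

            // ...and receive it back. Routing, persistence, and delivery
            // are the broker's job; the client only sends and receives.
            String body = context.createConsumer(queue)
                                 .receiveBody(String.class, 1000);
            System.out.println("Received: " + body);
        }
    }
}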

Related

How to provide documentation for the message formats an ActiveMQ instance requires to communicate with?

I am trying to provide some services over ActiveMQ using Camel routing features, but I need my clients to know what kinds of messages they can send over ActiveMQ. I am thinking of something like Swagger documentation for Spring MVC REST APIs. Is there any mechanism for that, or should I do it manually?
What you're asking for isn't really how messaging works. ActiveMQ is a message broker. Each protocol the broker supports can have client implementations in essentially any language on any platform, and each such client implementation has its own API documentation.
ActiveMQ does provide a JMS client implementation, as is expected of a JMS provider. You can read the JMS 1.1 specification or peruse the JavaDoc to understand the API better.
Aside from that, ActiveMQ supports the following protocols:
AMQP 1.0
STOMP 1.0, 1.1, & 1.2
MQTT 3.1
Again, each of these protocols will have various client implementations with their own documentation.
These protocols are akin to HTTP in your REST use-case: they are essentially a transport mechanism. You will still have to specify message formats in order to exchange data between applications, and those message formats are akin to your REST API.
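For example, nothing in JMS (or AMQP, STOMP, MQTT) dictates what goes inside a message body; that contract is yours to define and document. Here is a sketch of what such a documented format might look like in code; the queue name, JSON fields, and contentType property are illustrative assumptions, not anything ActiveMQ prescribes.

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class OrderSender {
    public void sendOrder(ConnectionFactory factory) {
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("orders.incoming"); // hypothetical queue

            // This JSON layout is the "message format" your documentation
            // must pin down: field names, types, required vs. optional --
            // the moral equivalent of a REST API schema.
            String json = "{\"orderId\": 42, \"item\": \"book\", \"quantity\": 1}";

            context.createProducer()
                   .setProperty("contentType", "application/json") // convention, not a standard header
                   .send(queue, json);
        }
    }
}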
Thanks to @Helen's comment I found out about AsyncAPI. It provides documentation and code generation tools for services exposed over event-driven architectures, and it is inspired by OpenAPI (Swagger) specifications. As stated in the AsyncAPI Specification v2.1.0:
The AsyncAPI Specification is a project used to describe and document message-driven APIs in a machine-readable format. It’s protocol-agnostic, so you can use it for APIs that work over any protocol (e.g., AMQP, MQTT, WebSockets, Kafka, STOMP, HTTP, Mercure, etc).
The AsyncAPI Specification defines a set of files required to describe such an API. These files can then be used to create utilities, such as documentation, integration and/or testing tools.
You just have to create a YAML or JSON file. They provide multiple generators that produce code and documentation from your specification files. I used their HTML generator to produce my documentation.
Also, this is a good example of how to define your specifications based on AsyncAPI.

Advantages and disadvantages of Spring caching, Hazelcast and Redis?

Well, I'm quite new to this caching thing, so I'm wondering what benefits these caching mechanisms provide. Thank you.
In general, you can combine Spring caching with either Hazelcast or Redis as the underlying cache provider.
By default, Spring uses a ConcurrentHashMap for caching, which provides only basic functionality and is not distributed. Hazelcast and Redis, on the other hand, are distributed caching solutions.
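To make this concrete, the Spring cache abstraction looks the same regardless of provider; only configuration decides what sits behind @Cacheable. A minimal sketch, with made-up service and cache names:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching // switches on Spring's cache abstraction
class CacheConfig {
    // With no CacheManager bean defined, Spring Boot falls back to a simple
    // ConcurrentHashMap-backed cache. Adding the Hazelcast or Redis starter
    // swaps in a distributed CacheManager without touching BookService.
}

@Service
class BookService {
    @Cacheable("books") // results are cached by isbn; repeated calls skip the lookup
    public String findBookTitle(String isbn) {
        // stand-in for a slow database or remote call
        return "Title for " + isbn;
    }
}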
Here are some good resources on this topic:
Caching with Spring Boot and Hazelcast
DB Engines: Comparison Hazelcast vs. Memcached vs. Redis
DZone: Hazelcast vs ElastiCache (Memcached)
Compare Hazelcast and Redis

In a Spring Boot application, which integration tool (Apache Camel or Spring Integration) is easier or better for IBM MQ?

We are using a Spring Boot application with IBM MQ. Which integration tool is better: Apache Camel or Spring Integration? I'm new to these technologies and this is a high-priority decision, so I'm making it a separate question. If a similar question already exists, please link it in a comment and I will refer to it.
Please be aware that Apache Camel uses Spring JMS for its JMS messaging component, so you are looking at using Spring JMS regardless of the other technologies. If you are looking at Camel and Spring Boot, have a look at Spring Boot - Camel - IBM MQ. This example makes good use of connection pooling.
The example also uses JTA-synchronized transactions for reliable message delivery (DUPS_OK mode, not XA). This can be difficult to achieve on your own.
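For orientation, once the IBM MQ ConnectionFactory is configured as a Spring bean, a Camel route in a Spring Boot app is only a few lines. A minimal sketch; the queue names and component setup are assumptions for illustration, not taken from the linked example:

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class MqRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "jms" is Camel's JMS component, which is backed by Spring JMS.
        // The underlying IBM MQ ConnectionFactory (ideally wrapped in a
        // connection pool) is assumed to be configured as a Spring bean.
        from("jms:queue:DEV.QUEUE.1")        // hypothetical source queue
            .log("Received: ${body}")
            .to("jms:queue:DEV.QUEUE.2");    // hypothetical target queue
    }
}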

Spring Kafka or Kafka Streams for a high-volume data processing Spring Boot application?

I am working on a high-volume JSON data processing application for a bank using Spring Boot, Kafka, and QuickFIX/J. This is my first time working with technologies like Kafka and QuickFIX/J, and I am unable to decide whether I should use plain Spring Kafka, Kafka Streams, or Spring Cloud Stream.
Here is the requirement:
Read data from multiple Kafka topics
Process the data and send it to a QuickFIX/J initiator, which forwards it to an external FIX engine
A QuickFIX/J acceptor receives the data back from the external FIX engine and writes it to multiple Kafka topics, but different ones this time
I have gone through tutorials/articles that say Kafka Streams or Spring Cloud Stream is a good fit if you have both consumers and producers, are streaming high volumes of data, and want exactly-once processing. But here I need to send data to an external party after processing, receive it back, and then write it to Kafka topics.
Is using Kafka Streams a good choice, or shall I use Spring Kafka with normal producers and consumers?
Spring Cloud Stream is just a higher-level, opinionated abstraction on top of Spring for Apache Kafka. It can handle your use case (there are several "sink" sample applications).
Similarly, Kafka Streams does not necessarily have to produce output to Kafka (although that's what it is designed to do).
Probably the fastest on-ramp is Spring Cloud Stream (or Spring for Apache Kafka with Spring Boot), because most of the cookie-cutter configuration is provided for you and you can concentrate on your business logic.
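To give a feel for that on-ramp, a functional-style Spring Cloud Stream processor is just a bean; the binding to Kafka topics lives entirely in configuration. A minimal sketch; the bean name, topic names, and toy logic are illustrative assumptions:

import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FixBridge {
    // Spring Cloud Stream binds this function to Kafka via properties such as:
    //   spring.cloud.stream.bindings.process-in-0.destination=orders.in
    //   spring.cloud.stream.bindings.process-out-0.destination=orders.out
    // (topic names above are made up for this sketch)
    @Bean
    public Function<String, String> process() {
        return payload -> {
            // Business logic goes here; in the question's use case this is
            // roughly where a message would be handed to the QuickFIX/J
            // initiator rather than transformed in place.
            return payload.toUpperCase();
        };
    }
}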

Benefits of having HTTP endpoints return Flux/Mono instances instead of DTOs

I've watched Spring Tips: Functional Reactive Endpoints with Spring Framework 5.0 and read a little about spring reactor but I can't quite understand it.
What are the benefits of having endpoints return Flux/Mono instances (jacksonified) instead of straight-up DTO objects (jacksonified), given that I've got Netty and Spring Reactor active? I initially assumed that reactive streams would, in an HTTP request/response context, work more like WebSockets, wherein the server pushes data to the receiver over an open channel, but this doesn't seem to be the case.
Also, what does Netty actually do better in reactive programming than Tomcat?
I'm sorry if these questions seem stupid but I don't quite understand the purpose of this new framework direction. Why did it come about, how does it work and what problems does it solve?
I highly suggest you watch "Reactive Web Application with Spring 5" by Rossen Stoyanchev, recently presented at Devoxx Belgium.
In it he talks about how the reactive web controller (shown below) looks on the surface like a Spring MVC HTTP servlet request/response controller, but actually isn't one:
@GetMapping("/users/{id}")
public Mono<User> getUser(@PathVariable Long id) {
    return this.userRepository.findById(id);
}

@GetMapping("/users")
public Flux<User> getUsers() {
    return this.userRepository.findAll();
}
He talks about how Servlet 3.1, although non-blocking, doesn't truly work for a fully reactive stack, and how the glue code connecting Servlet 3.1 and Reactive Streams is implemented as part of the Spring 5 changes for Servlet 3.1-compliant web containers (Jetty and Tomcat).
And of course he touches on the fully reactive, non-blocking servers (Netty, Undertow) that are supported for running Reactive Streams.
It's not accurate to say that Netty is better than Tomcat; the implementations are simply different. Tomcat uses Java NIO to implement the Servlet 3.1 spec, while Netty also uses NIO but introduces its own custom API.
If you want insight into how Servlet 3.1 is implemented on top of Netty, watch this video: https://youtu.be/uGXsnB2S_vc
