Local AMQP messages between microservices on the same machine

I have developed a modular system based on microservices that can be deployed on one or several machines.
Each microservice is actually a package, and communicates with other MSs by sending AMQP messages.
This way each server can specify which microservices to deploy, by installing the relevant packages.
So there is one microservice that exposes an HTTP REST API and forwards requests to the other microservices via AMQP messages.
Here is an example: each square is a server. Each server can deploy one or more microservices. So there is one MS listening for HTTP requests, which sends requests through CloudAMQP channels to the other MSs.
                         +=============+
                         | MSs         |
                         | - AUTHORIZE |
                  amqp   | - LOG       |<---+
         +=========+---->+=============+    |
http://  |   MS    |                        |
   ===>  | REST_API|                        | amqp
         +=========+---->+=============+    |
                  amqp   | MS          |<---+
                         | - REPORT    |
                         +=============+
Now, the problem is that MSs on the same server also exchange AMQP messages through the channel, as if they were deployed on different servers.
This wouldn't necessarily be a problem, but it consumes credits on CloudAMQP.
Looking at the example above: a user tries to log in, making a POST to MS_REST_API. This MS sends an AMQP message to MS_AUTHORIZE which, in turn, sends an AMQP message to MS_LOG to log the event.
Any idea on how to simulate sending AMQP messages for MSs deployed on the same machine, using something local instead of CloudAMQP?

I would advise against trying to optimize the local-service path unless there is some huge factor you haven't described above. The complexity and opportunity for errors go way up if you try this. Consider, for example, a service that needs to interact with services both locally and on other nodes.
If there is a reason to change your approach, I'd recommend looking at something like Apache Qpid Dispatch Router, which knows where the applications are and can route messages accordingly.
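If you do decide to go down this road, the core idea is a dispatch layer that hides the transport: services on the same machine are invoked in-process, and only unknown targets fall back to the AMQP publisher. The sketch below is a minimal illustration of that idea, not any real library's API; the `RemoteSender` interface and service names like `AUTHORIZE` are made-up placeholders for your CloudAMQP publisher and your MS names.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hybrid dispatcher sketch: route to a local handler when the target
// microservice is registered in this process, otherwise fall back to the
// remote (CloudAMQP) publisher. All names here are illustrative.
public class HybridDispatcher {
    // Stands in for an AMQP publisher; a real one would publish to a queue.
    public interface RemoteSender {
        String send(String service, String payload);
    }

    private final Map<String, Function<String, String>> localHandlers = new HashMap<>();
    private final RemoteSender remote;

    public HybridDispatcher(RemoteSender remote) {
        this.remote = remote;
    }

    // Microservices deployed on this machine register an in-process handler.
    public void registerLocal(String service, Function<String, String> handler) {
        localHandlers.put(service, handler);
    }

    // Local targets never touch the broker, so they cost no CloudAMQP credits.
    public String send(String service, String payload) {
        Function<String, String> handler = localHandlers.get(service);
        return (handler != null) ? handler.apply(payload) : remote.send(service, payload);
    }
}
```

Note that this is exactly the complexity the answer warns about: the dispatcher now needs to know (and keep up to date) which services are local, which is the bookkeeping a router like Qpid Dispatch Router does for you.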

Related

Routing messages from Kafka to web socket clients connected to application server cluster

I would like to figure out the best way to route messages from Kafka to web socket clients connected to a load-balanced application server cluster. I understand that spring-kafka facilitates consuming and publishing messages to a Kafka topic, but how does this work in a load-balanced application server scenario when connecting to a distributed Kafka topic? Here are the requirements that I would like to satisfy, with the overall goal of facilitating peer-to-peer messaging in an application with a very, very large volume of users:
Web clients can connect to a tomcat application server via web sockets connection via a load balancer.
Web clients can send a message/notification to another client that's connected to a different Tomcat application server.
Messages are saved in the database and published to a kafka topic/partition that can be consumed by the appropriate web clients/users.
Kafka can be scaled to many brokers with many consumers.
I can see how this could be implemented quite easily in a single application server scenario where the consumer consumes all messages from a Kafka topic and re-distributes them via Spring messaging/websockets. But I can't figure out how this would work in a load-balanced scenario where there are consumers on each application server forming an overall consumer group for the Kafka topic. Assuming that each of the application servers is consuming sub-sets/partitions of the Kafka topic, how do they know which server their intended recipients are connected to? And even if they knew which server their recipients were connected to, how would they route the message to them via websockets?
I considered that the application server load balancing could work by logging users with a particular routing key (user names starting with 'A', etc.) on to a specific application server, then only consuming messages for users starting with 'A' on that application server. But this seems like it would be difficult to maintain and would make autoscaling very difficult. This seems like it should be a common scenario to implement, but I can't find any tools or approaches that fit it.
Sounds like every single consumer should live in its own consumer group. This way all the available consumers are going to consume all the messages sent to the topic, therefore all the connected websocket clients are going to be notified with those messages.
If you need more complex logic with those messages after consuming, e.g. filtering, routing, transforming, aggregating etc., you should consider involving Spring Integration in your project: https://spring.io/projects/spring-integration
Broadcasting to all the consumers may work, but the most efficient solution would route each message precisely to the node holding the websocket connection for the target user. As far as I know, routing in a distributed system can be done as follows:
Put the route information in a middleware such as Redis, or implement a service yourself to keep track of all the sessions. That is, solve it in a centralized way.
Let the websocket servers find routes by themselves. In this case, a dissemination protocol like gossip should be taken into consideration.
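The centralized option boils down to a shared session registry: each app server records which users are connected to it, and anyone wanting to deliver a message looks up the owning node first. The sketch below uses an in-memory map as a stand-in for the shared store (e.g. Redis); the node identifiers like "wss1" are made-up examples.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "route information in a middleware" idea: a shared registry
// maps each user to the node holding their websocket connection. A
// ConcurrentHashMap stands in for Redis here; in production each app server
// would write its entries to the shared store on connect/disconnect.
public class SessionRegistry {
    private final Map<String, String> userToNode = new ConcurrentHashMap<>();

    // Called by a node when a user's websocket opens or closes.
    public void connected(String userId, String nodeId) {
        userToNode.put(userId, nodeId);
    }

    public void disconnected(String userId) {
        userToNode.remove(userId);
    }

    // A sender uses this lookup to forward the payload only to the node
    // that owns the target session, instead of broadcasting to all nodes.
    public String nodeFor(String userId) {
        return userToNode.get(userId);
    }
}
```

With this in place, a Kafka message keyed by recipient can be forwarded (over the topic, or a per-node channel) only to the node returned by the lookup.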

I need to build a Vert.x virtual host server that channels traffic to other Vert.x apps. How is this kind of inter-app communication accomplished?

As illustrated above, I need to build a Vert.x Java app that will be an HTTP server/virtual host (TLS HTTP traffic, WebSocket traffic) that will redirect/channel specific domain traffic to other Vert.x Java apps running on the same server, each in its own JVM.
I have been reading for days but I remain uncertain as to how to approach all aspects of the task.
What I DO know or have experience with:
- Creating an HTTP server, etc.
- Using a Vert.x VirtualHost handler to "handle" incoming traffic for a specific domain
What I DO NOT know:
- How do I "redirect" a domain's traffic to another Vert.x app (this other Vert.x app would also be running on the same server, in its own JVM)?
- Naturally this "other" Vert.x app would need to respond to HTTP requests, etc. What Vert.x mechanisms do I employ to accomplish this aspect of the task?
Are any of the following concepts part of the solution? I'm unfamiliar with these concepts and how they may or may not form part of the solution:
Running each Vert.x app using -cluster option?
Vert.x Streams?
Vert.x Pumps?
There are multiple ways to let your microservices communicate with each other. The fact that all your apps are running on the same server doesn't change much, but it makes option 2.) easy to configure.
1.) Rest based client - server communication
Both host and apps have a webserver
When you handle the incoming requests on the host, you simply call another app with a HttpClient
Typically all services find each others address via service discovery.
Eg: each service registers his address in a central registry then other services use this central registry to find the addresses.
Note: this may be overkill for you; you could just configure the addresses of the other services.
2.) You start the vertx microservices in clustered mode
the eventbus is then shared among the services
For all incoming requests you send a broadcast on the eventbus
the responsible app replies to the message
For further reading you can checkout https://vertx.io/docs/vertx-hazelcast/java/#configcluster. You start your projects with -cluster option and define the clustering in an xml configuration. I think by default it finds the services via local broadcast.
3.) You use a message broker like RabbitMq etc.
All your apps connect to a central message broker
When a new request comes in to the host, it sends a message to the message broker
The responsible app then listens for the relevant messages and replies
The host receives the reply from the message broker
There are already many existing vertx clients for certain message brokers like kafka, camel, zeromq:
https://github.com/vert-x3/vertx-awesome#integration
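For option 1.), the front app's job reduces to a routing decision: given the requested domain, pick the local backend app's address and forward the request there with an HTTP client. The sketch below shows just that routing decision; the domains and ports are invented examples, and in a real Vert.x app the actual forwarding would use Vert.x's own HttpClient against the chosen address.

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the host-side routing for a REST-based virtual host: map each
// domain to the local port of the Vert.x app (in its own JVM) serving it.
// Domains and ports here are made-up examples.
public class VirtualHostRouter {
    private final Map<String, Integer> domainToPort;

    public VirtualHostRouter(Map<String, Integer> domainToPort) {
        this.domainToPort = domainToPort;
    }

    // Returns the local backend address for the request's Host header,
    // if any app is registered for that domain.
    public Optional<String> backendFor(String hostHeader) {
        String domain = hostHeader.split(":")[0]; // strip any ":port" suffix
        Integer port = domainToPort.get(domain);
        return Optional.ofNullable(port).map(p -> "localhost:" + p);
    }
}
```

The host app would terminate TLS, call `backendFor(...)`, and proxy the request to the returned address, with each backend app exposing a plain HTTP server on its assigned local port.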

Do I need session-clustering on a DB for load balancing a Jetty WebSockets server with HAProxy on AWS/EC2?

I am writing a chat-like application using WebSockets using a Jetty 9.3.7 WebSockets server running on AWS/EC2. A description of the architecture is below:
(a) The servers are based on HTTPS (wss). I am thinking of using HAProxy using IP hash-based LB for this. The system architecture will look like this:
                            --> wss1: WebSocket server 1
                           /
clients --> { HAProxy LB } ---> wss2: WebSocket server 2
(a, b,..z)                 \
                            --> wss3: WebSocket server 3
I am terminating HTTPS/wss on the LB per these instructions.
(b) Clients a...z connect to the system and will connect variously to wss1, wss2 or wss3 etc.
(c) Now, my WebSocket application works as follows. When one of the clients pushes a message, it is sent to the WS server the client is connected to (say wss1), and that message is then disseminated to a few of the other clients (the set of clients being determined programmatically by my WebSocket application running on wss1). E.g., a creates a message Hey guys! and pushes it to wss1, which is then pushed to clients b and c so that b and c receive the Hey guys! message. b has a WebSocket connection to server wss2 and c has a WebSocket connection to wss3.
My question is, to push the message from the message receiving server, like (c) above, wss1 needs to know the WebSocket session/connection to b and c which may well be on a different WebSocket server. Can I use session clustering on Jetty to retrieve the sessions b and c are connected to? If not, what's the best way to provide this lookup while load balancing Jetty WebSockets?
Second, if I do use session clustering or some such method to retrieve the session, how can I use the sessions for b and c on wss1 to send the message to b and c? It appears like there is no way to do this except with some sort of communication between the servers. Is this correct?
If I have to use session clustering for this, is there a github example you can point me to?
Thanks!
I think session clustering is not the right tool. Message Oriented Middleware (MOM) supporting the publish-subscribe model should be enough to cluster multiple real-time applications. As the author of Cettia, a real-time application framework, I've used the publish-subscribe model to scale applications horizontally.
The basic idea is
A message to be exchanged through the MOM is an operation to apply to each server locally. For example, an operation can be 'send a message to all clients'. Here 'all clients' means the ones connected to the server executing the operation.
Every server subscribes to the same topic on the MOM. When a message arrives on that topic, each server deserializes it into an operation and executes the operation locally. This happens on every server.
When an operation happens on some server, that server serializes it into a message and publishes it to the topic.
With Cettia, all you need to do is plug your MOM into your Cettia application. If you want to build it from scratch, you need to implement the ideas above.
http://cettia.io/projects/cettia-java-server/1.0.0-Beta1/reference/#clustering
https://github.com/cettia/cettia-java-server/blob/1.0.0-Beta1/server/src/main/java/io/cettia/ClusteredServer.java
Here are working examples for some MOMs. Though they are examples written with Cettia, they might help you understand how the above idea works.
AMQP 1
Hazelcast 3
jGroups 3
JMS 2
Redis 2
Vert.x 2
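The operation-broadcast idea above can be simulated without any real MOM: every server subscribes to one shared topic, and each published operation is applied on every server against its own local sessions, so the server holding the target connection delivers it and the others ignore it. In the sketch below the in-memory `Topic` stands in for the MOM, and the `"user:text"` operation format is an invented encoding for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simulation of the publish-subscribe clustering idea: a shared Topic stands
// in for the MOM, and each Server applies every broadcast operation to its
// own locally connected sessions only.
public class OperationCluster {
    // Minimal stand-in for a pub/sub topic on a MOM.
    public static class Topic {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        public void subscribe(Consumer<String> s) { subscribers.add(s); }
        public void publish(String op) { subscribers.forEach(s -> s.accept(op)); }
    }

    // One app server: holds its local sessions, applies broadcast operations.
    public static class Server {
        public final Map<String, List<String>> inboxes = new HashMap<>(); // userId -> messages

        public Server(Topic topic) { topic.subscribe(this::apply); }

        public void connect(String userId) { inboxes.put(userId, new ArrayList<>()); }

        // Operation format "user:text": deliver only if that user is local.
        private void apply(String op) {
            String[] parts = op.split(":", 2);
            List<String> inbox = inboxes.get(parts[0]);
            if (inbox != null) inbox.add(parts[1]);
        }
    }
}
```

This is why no server needs to know where b and c are connected: wss1 just publishes the operation, and whichever server owns the session executes it.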

Some kind of proxy to investigate requests

I need some kind of proxy which sits between client and server and simply dumps all requests, but also forwards them to allow communication between client and server.
|--------|             |-------|             |--------|
| client |-------------| proxy |-------------| server |
|--------|             |-------|             |--------|
Reason I need this: I'm working on a client software and want to see how the (REST) requests actually are sent to the server. For example, I want to see the multipart POST entities, etc.
Of course I can use netcat to "emulate" the server, but this does not actually result in traffic going from client to server and vice versa. The ideal situation would be a proxy listening on port yyy which forwards all traffic from the client machine to host:yyy.
So my main question is: what kind of proxy am I looking for? Is it a forward proxy? Or a transparent proxy?
If you are on Windows, you could be looking for the Fiddler HTTP proxy.
It is a web development tool that, when run, sets itself up as the global proxy for your computer. It shows all HTTP requests that your computer makes, with various options to inspect/replay/alter them.
You can get it at: http://www.telerik.com/fiddler
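If you'd rather build the dump-and-forward behavior yourself, the core is simple: copy bytes from one side to the other while tee-ing everything into a log. The sketch below shows only that stream-level logic; a real proxy would run it in two threads per accepted socket connection (client-to-server and server-to-client).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Core of a dumping forward proxy: forward bytes unchanged while writing a
// copy of everything to a log stream for inspection.
public class TeeRelay {
    public static void relay(InputStream from, OutputStream to, OutputStream log) {
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = from.read(buf)) != -1) {
                to.write(buf, 0, n);  // forward unchanged to the other side
                log.write(buf, 0, n); // dump a copy (e.g. to a file or stderr)
            }
            to.flush();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Pointing the client at the proxy's listening port and connecting the proxy onward to host:yyy gives you real client-server traffic plus a full dump, including multipart POST entities.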

Configuring JMS over a Weblogic Cluster

I have a setup of 2 WLS managed servers configured as part of a WLS cluster.
1) The requirement is to send requests to another system and receive responses using JMS as interface.
2) The request could originate from either of the Managed Servers. So the corresponding response should reach the managed server which originated the request.
3) The external system (to which requests are sent) should not be aware of how many managed servers are in the cluster (not a must have requirement)
How should JMS be configured to meet these requirements?
Simple! Set up a response queue for each managed server and add a "reply-to" field in the messages you send to the other system. The other system then reads from the request where to send the reply. Deploy one Message Driven Bean (MDB) on each managed server (i.e. not on the cluster, one per managed server) to consume the reply messages sent to its reply queue. Note that you might want to use clustered reply queues and persistent messages for load balancing and failover.
This is actually a combination of the Request-Reply and the Return Address patterns and is illustrated by the picture below:
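The Return Address half of this combination can be sketched without a real JMS provider: each managed server owns its own reply queue, and every request carries that queue's name, so the external system answers the right originator without knowing how many managed servers exist. In-memory queues stand in for JMS destinations below, and the `"replyTo|body"` encoding is an invented illustration; with real JMS you would instead call `message.setJMSReplyTo(queue)` on the request.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of the Return Address pattern: requests carry the name of the
// originator's reply queue, and the external system sends the response there.
// In-memory queues stand in for JMS destinations.
public class ReplyToBroker {
    private final Map<String, Queue<String>> queues = new HashMap<>();

    public Queue<String> queue(String name) {
        return queues.computeIfAbsent(name, k -> new ArrayDeque<>());
    }

    // The external system: reads the reply-to field from the request instead
    // of knowing anything about the cluster topology.
    public void externalSystem(String request) {
        String[] parts = request.split("\\|", 2); // "replyTo|body"
        queue(parts[0]).add("response to " + parts[1]);
    }
}
```

Each managed server's MDB would then consume only from its own reply queue, which is exactly how the response finds its way back to the originating server.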
