What is the essence of a saga orchestrator - microservices

Two simple questions about orchestration:
1- What is the essence of the orchestrator? Is it a separate service shared between the other services, or logic inside each service that is created for each operation and killed after the operation finishes?
2- Can we use TCP for the connection between the orchestrator and the participants, or are only message brokers accepted?
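For what it's worth, a minimal sketch of the first reading (a separate, long-lived orchestrator service that creates per-operation saga state and drives the participants with command messages) might look like this in Java; every name here is hypothetical:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: the orchestrator itself is one long-lived service;
    // what is created per operation and discarded at the end is the saga
    // *state*, not the orchestrator.
    public class OrchestratorService {
        private final Map<UUID, String> sagaState = new ConcurrentHashMap<>();
        private final MessageBus bus; // a broker is typical, but any reliable transport works

        public OrchestratorService(MessageBus bus) { this.bus = bus; }

        public UUID startSaga() {
            UUID sagaId = UUID.randomUUID();
            sagaState.put(sagaId, "ORDER_PENDING");
            bus.send("order-service", sagaId, "CreateOrder");
            return sagaId;
        }

        public void onReply(UUID sagaId, String event) {
            switch (event) {
                case "OrderCreated":
                    sagaState.put(sagaId, "PAYMENT_PENDING");
                    bus.send("payment-service", sagaId, "TakePayment");
                    break;
                case "PaymentFailed":
                    // a real saga would send a compensating CancelOrder here
                case "PaymentTaken":
                    sagaState.remove(sagaId); // operation finished, state is killed
                    break;
            }
        }

        interface MessageBus { void send(String participant, UUID sagaId, String command); }
    }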

Related

How websocket services work in a clustered deployment

Let's say I have a websocket implemented in Spring Boot. The architecture is microservices. I have deployed the service in a Kubernetes cluster and I have 2 running instances of the service; the socket implementation uses STOMP with Redis as the broker.
Now the first connection is created between a client and one of the instances. Does all the data flow occur through the client and the connected instance? Would the other instance also have a connection? In case the current instance goes down, would the other instance open up a connection?
Now let's say I'm sending some data back to the client which comes through a Kafka topic. Either instance could read it. If so, would either of them be able to send the data back to the client?
Can someone help me understand these scenarios?
A websocket is a permanent connection. After opening it, it will be routed through Kubernetes to a fixed pod. No other pod will receive the connection.
If the pod goes down, the connection is terminated.
If a new connection is created, for example by a different user, it may be routed to a different pod.
What data is transmitted, for example with Kafka as the source, is not relevant in this context; it could be anything.
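To make that concrete: because each STOMP session lives on exactly one pod, a common pattern is to fan the payload out to every instance (here via a Redis channel) and let each pod deliver only to the sessions it actually holds. A minimal sketch with Spring; the channel name and the "userId:body" payload format are assumptions:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.RedisConnectionFactory;
    import org.springframework.data.redis.listener.ChannelTopic;
    import org.springframework.data.redis.listener.RedisMessageListenerContainer;
    import org.springframework.messaging.simp.SimpMessagingTemplate;

    @Configuration
    public class WsFanoutConfig {

        // Every pod subscribes to the same channel, so each pod sees every
        // outbound payload no matter which pod consumed it from Kafka.
        @Bean
        RedisMessageListenerContainer wsFanout(RedisConnectionFactory connectionFactory,
                                               SimpMessagingTemplate template) {
            RedisMessageListenerContainer container = new RedisMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.addMessageListener((message, pattern) -> {
                String[] parts = new String(message.getBody()).split(":", 2);
                // Only reaches sessions registered on THIS pod; a pod that
                // does not hold the user's session simply delivers nothing.
                template.convertAndSendToUser(parts[0], "/queue/updates", parts[1]);
            }, new ChannelTopic("ws-outbound"));
            return container;
        }
    }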

How to funnel an API call to a specific Service Fabric node

I have exposed a websocket-enabled service endpoint through Azure Application Gateway, and the service is hosted on Azure Service Fabric. A client initiates a websocket connection with my endpoint and is able to exchange data. During certain message flows, my websocket-enabled service calls other services hosted on the Service Fabric cluster using Azure Service Bus. These are handled in a completely async manner. Once the other services finish processing, they post a message to the service bus, which my websocket service reads back.
The problem I am having is routing the messages back to the right Service Fabric node so that the data can be pushed to the client at the other end of the websocket connection.
In the picture below, you can imagine each node containing multiple services, including the websocket-enabled service. Once the websocket service posts a message to the service bus, the downstream services start processing, and finally they post a message back to the service bus, which the websocket service reads back. Here a random node will pick up the message, and it might not have the relevant websocket connection to push the processed data back.
[image: Sample Design]
I have looked at the Redis pub/sub model, and it looks like I would have to maintain the last message processed on the nodes. It also means every node in the cluster will need to read each message and discard it if it doesn't have the websocket connection with the client. I am looking for any suggested design models for this kind of problem.
I ran into a similar scenario and didn't like the idea of using a new external service (Redis/SQL Server) as a backplane that would simply duplicate each message/event across all nodes.
The solution I settled on was to lean on a property of actor proxies, using actor events to call back to a specific instance of a stateless service, creating an actor service to act as a pub/sub backplane.
The solution is summarised in this blog post and this GitHub repo. It's worth pointing out that the documentation states actor events are best effort. This hasn't really been an issue when the application is running as normal; I presume that during a deployment or failover some events may get lost, however this could be mitigated with additional work.
It's also worth noting that your load balancing rules should maintain sticky connections between clients and back-end instances. You could create separate rules for websockets if you only wanted this to apply to them and not your regular HTTP traffic.
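For contrast, the duplicate-and-discard backplane the question describes (the approach this answer moved away from) is simple to sketch: every node subscribes to the same channel, checks a local registry of the websocket connections it owns, and drops everything else. In Java with Jedis; all names and the "connectionId|payload" format are illustrative:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class BackplaneSubscriber {

        // Local registry of connections this node owns: connectionId -> session.
        private final Map<String, WebSocketSession> localSessions = new ConcurrentHashMap<>();

        public void run() {
            try (Jedis jedis = new Jedis("redis-host", 6379)) {
                // subscribe() blocks; every node in the cluster runs one of these.
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        String[] parts = message.split("\\|", 2);
                        WebSocketSession session = localSessions.get(parts[0]);
                        if (session == null) {
                            return; // not our client: discard, some other node owns it
                        }
                        session.send(parts[1]); // push to the client we hold
                    }
                }, "processed-results");
            }
        }

        // Stand-in for whatever websocket API the host process uses.
        interface WebSocketSession { void send(String payload); }
    }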

More than one listener for the queue manager

Can there be more than one listener on a queue manager? I have used one listener/queue manager combination so far and wonder if this is possible. I ask because we have 2 applications connecting to the same queue manager and they seem to have a problem with that.
There are a couple of meanings for the term listener in an MQ context. Let's see if we can clear up some confusion over the terminology and then answer the question as it relates to each.
As defined in the spec, a JMS listener is an object that implements a callback mechanism. It listens on destinations for messages and calls onMessage when they arrive. The destinations may be queues or topics hosted by any JMS-compliant transport provider.
In IBM MQ terms, a listener is a process (runmqlsr) that handles inbound connection requests on a server. Although these can handle a variety of protocols, in practice they are almost exclusively TCP listeners that bind a port (1414 by default) and negotiate connection requests on sockets.
TCP Ports
Tim's answer applies to the second of these contexts. MQ can listen for sockets on multiple ports and indeed it is quite common to do so. Each listener listens on one and only one port. It may listen on that port across all network interfaces or can be bound to a specific network interface. No two listeners can bind to the same combination of interface and port though.
In a B2B context the best practice is to run a dedicated listener for each external business partner to isolate each of their connections across dedicated access paths. Internally I usually recommend separate ports for QMgr-to-QMgr, app-to-QMgr and interactive user connections.
In that sense it is possible to run multiple listeners on a given QMgr. Each of those listeners can accept many connections. Their job is to negotiate the connection then hand the socket off to a Message Channel Agent which talks to the QMgr on behalf of the remotely connected client or QMgr.
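For example, adding a dedicated listener on its own port is a one-liner in MQSC (the listener name and port here are illustrative):

    DEFINE LISTENER(PARTNER.A.LISTENER) TRPTYPE(TCP) PORT(1415) CONTROL(QMGR)
    START LISTENER(PARTNER.A.LISTENER)

With CONTROL(QMGR) the listener starts and stops along with the queue manager.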
JMS Listeners
Based on comments, Ulab refers to JMS listeners. These objects establish a connection to a queue manager and then wait in GET mode for new messages arriving on a destination. On arrival of a message, they call the onMessage method which is an asynchronous callback routine.
As to the question "can there be more than one (JMS) listener to a queue manager?" the answer is definitely yes. A multi-threaded application can have multiple listeners connected, multiple application instances can connect at the same time, and many thousands of application connections can be handled by a single queue manager with sufficient memory, disk and CPU available.
Of course, each of these applications is ultimately connected to one or more queues, so the question then becomes whether they can connect to the same queue.
Many listeners can listen on the same queue so long as they do not get exclusive access to it. Each will receive a portion of the messages arriving.
Listeners on QMgr-managed subscriptions are exclusively attached to a dynamic queue but multiple instances on the same topic will all receive the same messages.
If the queue is clustered and there is more than one instance of it, multiple listeners will be required to get all the messages, since they will normally be spread across those instances by MQ workload distribution.
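As a sketch of that first point, here are two JMS listeners sharing one queue, each receiving a share of the arriving messages. The queue name is illustrative and the connection factory is provider-specific (IBM MQ's JMS classes follow the same pattern):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class TwoListeners {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = lookupConnectionFactory(); // e.g. from JNDI
            Connection connection = factory.createConnection();

            // Two sessions, two consumers, one non-exclusive queue: each
            // onMessage callback receives a portion of the traffic.
            for (int i = 0; i < 2; i++) {
                final int id = i;
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("APP.REQUESTS");
                MessageConsumer consumer = session.createConsumer(queue);
                consumer.setMessageListener(message ->
                        System.out.println("listener " + id + " got " + message));
            }
            connection.start(); // delivery to the listeners begins here
        }

        private static ConnectionFactory lookupConnectionFactory() {
            throw new UnsupportedOperationException("obtain from JNDI or your provider");
        }
    }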
Yes, you can create as many listeners as you wish:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.explorer.doc/e_listener.htm
However, there is no reason why two applications can't connect to the queue manager via the same listener (on the same port). What problem have you run into?

Realtime connection (SockJS/Socket.io) and Microservice application

Currently I'm building an application with a microservice architecture.
The first application is an API that handles user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id in the User object.
The second application is a WORKER doing some work, and sometimes it has to send realtime data to the user.
The question is: how should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, and then the API forwards this message to the user?
Or can the WORKER send the message directly to the user?
Thank you
In a perfect world, the service responsible for publishing realtime push notifications should be separate from the other services: a microservice is a set of narrowly related methods, and there is no relation between the authentication ("user") service and the realtime push notification service. Breaking it down further, authentication is really a separate service of its own; this is just FYI, as there might be a reason you did it this way.
How would the services communicate? There are many ways to implement internal communication between services. One is an MQ solution, which adds more technology to your stack: RabbitMQ, Beanstalk, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but you need to consider that HTTP calls add more overhead.
The ideal solution is for each service to have two interfaces through which it can be invoked: an HTTP interface and an MQ interface (console).
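A sketch of the first option (WORKER -> broker -> API -> user) with the RabbitMQ Java client; the queue name and the "socketId|payload" format are assumptions, and sendToSocket stands in for Socket.io/SockJS delivery using the socket id stored on the User object:

    import com.rabbitmq.client.Channel;

    public class WorkerToApiBridge {
        private static final String QUEUE = "realtime-out"; // illustrative name

        // WORKER side: publish and stay completely socket-agnostic.
        static void publish(Channel channel, String socketId, String payload) throws Exception {
            channel.queueDeclare(QUEUE, true, false, false, null);
            channel.basicPublish("", QUEUE, null, (socketId + "|" + payload).getBytes());
        }

        // API side: consume and forward over the connection it already holds.
        static void consume(Channel channel) throws Exception {
            channel.queueDeclare(QUEUE, true, false, false, null);
            channel.basicConsume(QUEUE, true, (tag, delivery) -> {
                String[] parts = new String(delivery.getBody()).split("\\|", 2);
                sendToSocket(parts[0], parts[1]); // placeholder for the realtime lib
            }, tag -> { });
        }

        static void sendToSocket(String socketId, String payload) { /* realtime lib */ }
    }

This keeps the WORKER ignorant of sockets entirely, which matches the separation the answer recommends.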

How to set up a singleton broker in ZeroMQ?

I'm trying to design a system using ZeroMQ where I have a bunch of consumers that each want to use a service, and I need a way for the consumers to start the desired service if it's not started yet. I can visualize having a broker do this (useful, since both consumers and services are dynamic and their endpoints are TCP ports that can't be known before runtime), but then I have to figure out how to start the broker if it hasn't started yet.
How do you reliably start a single broker using ZeroMQ? The call to bind() seems to accept more than one socket.
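One property worth knowing here: only one process can bind() a given tcp:// endpoint, so a successful bind can double as the election that decides who becomes the broker. A sketch in Java with JeroMQ (the port and socket type are illustrative):

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;
    import org.zeromq.ZMQException;

    public class SingletonBroker {
        public static void main(String[] args) {
            try (ZContext context = new ZContext()) {
                ZMQ.Socket frontend = context.createSocket(SocketType.ROUTER);
                try {
                    // Only one process can own this endpoint; any other
                    // candidate fails here and should connect() as a client.
                    frontend.bind("tcp://*:5555");
                } catch (ZMQException alreadyRunning) {
                    System.out.println("broker already running; connecting as client instead");
                    return;
                }
                System.out.println("won the bind, acting as broker");
                // ... broker loop: route consumer requests to services here
            }
        }
    }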
