I am trying to come up with a good way to "pause" the processing of messages with a MassTransit ServiceBus. Basically, I have a requirement to leave my Windows services running but temporarily stop the processing of messages.
I can only think of two ways to do this: use the subscription token, or dispose of the service bus.
Is there a preferred way of doing this, or am I heading down the wrong path?
Assuming you have some other code that needs to keep running, maybe it's worth the effort to just split it into separate services?
A longer way, perhaps complementary to the first, is to use a "control" bus, i.e. another endpoint, configured with a service that can create/dispose the whole container used for the messaging infrastructure. This offers control via messaging.
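For illustration, here is a minimal sketch of that control-bus idea against the current MassTransit API (the question predates it); the PauseProcessing/ResumeProcessing message types are made up, and it assumes the consumer is hosted on a second, always-running control bus while _mainBus is the bus doing the real work:

    using System.Threading.Tasks;
    using MassTransit;

    // Made-up control messages sent to the control endpoint.
    public class PauseProcessing { }
    public class ResumeProcessing { }

    // Hosted on the separate control bus; stops and restarts the main
    // bus without touching the Windows service itself.
    public class ProcessingControlConsumer :
        IConsumer<PauseProcessing>,
        IConsumer<ResumeProcessing>
    {
        private readonly IBusControl _mainBus;

        public ProcessingControlConsumer(IBusControl mainBus) => _mainBus = mainBus;

        public Task Consume(ConsumeContext<PauseProcessing> context) =>
            _mainBus.StopAsync(context.CancellationToken);

        public Task Consume(ConsumeContext<ResumeProcessing> context) =>
            _mainBus.StartAsync(context.CancellationToken);
    }

Because the control consumer lives on its own bus, stopping the main bus never deadlocks the endpoint that receives the pause command.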
Another way would be to use Topshelf in its "shelves" configuration; prior to Topshelf 3 you could externally control which shelf is operating. Place the whole message-processing domain into the shelf you want to control independently, and the rest of the service on its own shelf.
The downsides being:
it's a lot harder to debug services (interactively).
it will require repackaging the service (the .exe is no longer yours; only the services are).
Newbie here.
Reading the docs, I understand we can use an incoming HTTP request as a trigger to wake up a suspended activity.
In my case, the business trigger is the arrival of a message on a bus (from another system)...
I thought of building a dedicated hosted service that just listens for messages arriving on the bus and invokes/triggers the respective activities...
Would I be following the suggested patterns if I did that? It feels wrong, as I'd be writing some custom external code rather than relying on the declarative approach usually described in the Elsa docs...
Any thoughts welcome.
This is a great question. Both patterns are valid, and in fact the declarative approach depends on supporting infrastructure (such as hosted services).
For example, let's take the HttpEndpoint and AzureServiceBusMessageReceived activities.
Both of them require supporting infrastructure:
HttpEndpoint depends on ASP.NET Core middleware to trigger workflows as HTTP requests come in.
AzureServiceBusMessageReceived depends on a hosted service that contains message workers to trigger the appropriate workflows.
For your case, you don't have to write your own hosted service if you can use one of the existing messaging activities, since it's already done for you.
At the same time, it's perfectly OK to have your own hosted service that consumes messages and triggers workflows yourself. You could even make it a bit fancier by having your hosted service trigger business-specific activities.
For example, rather than triggering some low-level "message received" activity, you could trigger an "order created" activity if that is what the message is all about.
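As a rough sketch of that hand-rolled route: the hosted service below consumes messages and triggers a business-specific activity. IMessageReceiver and IWorkflowTrigger are hypothetical stand-ins for your bus client and for whichever trigger/launchpad API your Elsa version exposes:

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;

    public record OrderCreatedMessage(string OrderId);

    // Hypothetical abstraction over your bus client.
    public interface IMessageReceiver
    {
        Task<OrderCreatedMessage?> ReceiveAsync(CancellationToken ct);
    }

    // Hypothetical wrapper around Elsa's workflow-trigger service for a
    // custom "order created" blocking activity.
    public interface IWorkflowTrigger
    {
        Task TriggerOrderCreatedAsync(OrderCreatedMessage message, CancellationToken ct);
    }

    public class OrderMessageWorker : BackgroundService
    {
        private readonly IMessageReceiver _receiver;
        private readonly IWorkflowTrigger _trigger;

        public OrderMessageWorker(IMessageReceiver receiver, IWorkflowTrigger trigger)
        {
            _receiver = receiver;
            _trigger = trigger;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // Pull messages off the bus and resume any workflows blocked
            // on the business-specific activity.
            while (!stoppingToken.IsCancellationRequested)
            {
                var message = await _receiver.ReceiveAsync(stoppingToken);
                if (message is not null)
                    await _trigger.TriggerOrderCreatedAsync(message, stoppingToken);
            }
        }
    }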
More details about implementing these types of activities can be found at https://elsa-workflows.github.io/elsa-core/docs/guides/guides-blocking-activities.
As you already discovered, there are also examples in the repository at https://github.com/elsa-workflows/elsa-core/tree/master/src/samples.
I was only considering the Elsa guides, but I just discovered a whole list of additional samples in the elsa-core project itself. In particular, there are several examples that seem to handle my use case (e.g. Elsa.Samples.RabbitMqWorker).
We are designing a reporting system using a microservice architecture. All the services are supposed to be subscribers to the event bus, and they communicate by raising events. We also decided to expose each of our services via a REST API. Now the question is: is it a good idea to create our services as Web API [RESTful] applications which are also subscribers to the event bus? Basically there would be two points of entry to each service: the API and events. I have a feeling that we should separate these two, as they are two different concerns. Any ideas?
Microservices architecture is an unopinionated software design, so you may get different answers to this question.
Yes, REST and event-based communication are two different things, but sometimes combining both gives the design better flexibility.
Answering your concerns: I don't see any harm if REST APIs also subscribe to a queue, as long as you can maintain both of them, i.e. changes to messages do not have any impact on the APIs and you have proper fallback and eventual-consistency mechanisms in place. A few projects have already tried it, such as nakadi and ponte.
So it all depends on your services' communication behaviour whether to choose REST APIs, event-based design, or both.
Based on your requirements, you can choose REST APIs where you see synchronous behaviour between services, and go with event-based design where services need asynchronous behaviour; there is no harm in combining both (a sketch of a single service with both entry points follows the two lists below).
Ideally, messaging is the better fit for inter-service communication, and REST APIs are best fitted for client-to-service communication.
Check the communication styles on microservices.io.
REST-based Architecture
Advantages
Request/response is simple and best fitted when you need synchronous interactions.
A simpler system, since there is no intermediate broker.
Promotes orchestration, i.e. a service can take action based on the response of another service.
Drawbacks
Services need to discover the locations of other service instances.
One-to-one mapping between services.
REST uses HTTP, a general-purpose protocol built on top of TCP/IP, which adds a significant amount of overhead when used to pass messages.
Event-Driven Architecture
Advantages
Event-driven architectures are appealing to API developers because they function very well in asynchronous environments.
Loose coupling, since services are decoupled: on an event from one service, multiple services can take action depending on application requirements, and it is easy to plug any new consumer into a producer.
Improved availability, since the message broker buffers messages until the consumer is able to process them.
Drawbacks
The additional complexity of a message broker, which must be highly available.
Debugging an event flow is not that easy.
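To make the earlier point concrete, here is a sketch of one service exposed both ways, with the REST controller and the event handler as thin adapters over the same application service; all names are illustrative, and the event-bus wiring is assumed to be provided elsewhere:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // Shared application logic; both entry points delegate here.
    public interface IReportService
    {
        Task GenerateReportAsync(string orderId);
    }

    // Entry point 1: synchronous REST API.
    [ApiController]
    [Route("api/reports")]
    public class ReportsController : ControllerBase
    {
        private readonly IReportService _reports;

        public ReportsController(IReportService reports) => _reports = reports;

        [HttpPost("{orderId}")]
        public async Task<IActionResult> Generate(string orderId)
        {
            await _reports.GenerateReportAsync(orderId);
            return Accepted();
        }
    }

    public record OrderCompleted(string OrderId);

    // Entry point 2: asynchronous subscriber, invoked by the event-bus
    // subscription when an OrderCompleted event arrives.
    public class OrderCompletedHandler
    {
        private readonly IReportService _reports;

        public OrderCompletedHandler(IReportService reports) => _reports = reports;

        public Task Handle(OrderCompleted @event) =>
            _reports.GenerateReportAsync(@event.OrderId);
    }

Keeping both entry points as thin adapters means the two concerns stay separated at the edges while the business logic exists exactly once.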
I am looking for solutions for a scenario.
Let's assume a service-oriented architecture (SOA) with hundreds of services. The services are completely isolated – what is behind their APIs is an implementation detail.
Different services can have different security policies – i.e. who can access the service. For example, a service can be fully public, accessible to a subset of employees, accessible to a subset of other services, etc. Some services may even have that specified on the API level, for example a public service with some internal API calls (is that a bad idea?).
I have touched on ZMQ a bit, but not enough to know whether this interconnection of services can be accomplished with it. Any help in deciding whether or not to keep concentrating on ZMQ would be highly appreciated.
Are you asking about how to handle security in an SOA? Or are you asking whether or not it is feasible to build an SOA with 0MQ?
The former requires you to build it yourself. You need to define your own security policy between services. Not really 0MQ's domain.
For the latter, yes, 0MQ should allow you to build an SOA. In fact, we're doing it right now. Services are encapsulated in containers with an HTTP endpoint handled by nginx, which reverse-proxies the request to one or more Node.js servers within (through Express), which then PUSH messages to workers' PULL sockets on a fair-queue basis. Upon finishing processing the request, the worker PUSHes its reply back to the server's PULL socket. This way we can spin up any number of workers with minimal disruption to the server. And this is one service.
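The answer above describes the pattern in Node.js; purely as an illustration, the same PUSH/PULL fan-out sketched in C# with NetMQ (a .NET 0MQ binding) looks roughly like this, with both ends shown in one process for brevity:

    using System;
    using NetMQ;
    using NetMQ.Sockets;

    class PushPullSketch
    {
        static void Main()
        {
            // Server side: a PUSH socket fair-queues work across all
            // connected workers.
            using var sender = new PushSocket();
            sender.Bind("tcp://*:5557");

            // Worker side (normally a separate process): a PULL socket
            // receives its share of the work.
            using var worker = new PullSocket();
            worker.Connect("tcp://localhost:5557");

            sender.SendFrame("request payload");
            Console.WriteLine($"worker got: {worker.ReceiveFrameString()}");

            // The reply path mirrors this: the worker PUSHes its result
            // back to a PULL socket on the server.
        }
    }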
Service-to-service communication is handled through REST over HTTP.
This is more a theoretical question than a practical one, but even though I understand the principles of SOA, I am still a bit unsure whether it can be applied to any app.
The usual example is where a client wants to know something from a server, so we implement a service that can provide that information given a client request; it can be stateless or stateful, etc.
But what happens when we want to be notified when something happens on the server? Maybe we call a service to register a search and want to be notified when a new item that matches our search arrives at the server.
Of course that can be implemented using polling, perhaps leveraging long timeouts, but I cannot see a way in the usual protocols to receive events from the server without making a call to ask.
If you can point me to an example, or tell me about an architecture that could support this, you will have made my day.
Have you considered pub-sub (e.g. WS-Eventing, WS-Notification)? These are the usual means of pushing "stuff" to interested consumers/subscribers.
You want to use a publish-subscribe design. If you are using WCF, check out Programming WCF Services by Juval Löwy. In the appendix he shows how to build a pub-sub system that is actually fully per-call. It doesn't even rely on callback contracts or keeping long-running channels open, so it doesn't require any reconnection logic when communication is broken, let alone any polling.
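To give a feel for the shape of such a design, here is a rough sketch of the contracts a per-call WCF pub-sub service might expose; this is not Löwy's actual framework, and all names are illustrative:

    using System.ServiceModel;

    // Publishers fire events through one contract...
    [ServiceContract]
    public interface IEventPublisher
    {
        [OperationContract(IsOneWay = true)]
        void PublishItemMatched(string searchId, string itemId);
    }

    // ...and consumers register interest through another. The service
    // persists subscriptions, so no channel needs to stay open.
    [ServiceContract]
    public interface IEventSubscription
    {
        [OperationContract]
        void Subscribe(string searchId, string callbackAddress);

        [OperationContract]
        void Unsubscribe(string searchId, string callbackAddress);
    }

When an event is published, the service looks up the persisted subscribers and calls each one's own endpoint at its callback address, which is what makes the whole thing per-call with no long-lived connections.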
My company is looking at implementing a new VPN solution, but we require that the connection be maintained programmatically by our software. The VPN solution consists of a background service that seems to manage the physical connection, and a command-line/GUI utility that initiates the request to connect/disconnect. I am looking for a way to "spy" on the API calls between the front-end utility and the back-end service so that our software can make the same calls to the service. Are there any recommended software solutions or methods to do this?
Typically, communications between a front-end application and back-end service are done through some form of IPC (sockets, named pipes, etc.) or through custom messages sent through the Service Control Manager. You'll probably need to find out which method this solution uses, and work from there - though if it's encrypted communication over a socket, this could be difficult.
As Harper Shelby said, it could be very difficult, but you may start with FileMon, which can tell you when certain processes create or write to files; RegMon, which can do the same for registry reads and writes; and Wireshark to monitor the network traffic. This can get you some data, but even then it may be too difficult to interpret in a manner that would allow you to make the same calls.
I don't understand why you want to replace the utility instead of simply running the utility from your application.
Anyway, you can run "dumpbin /imports whatevertheutilitynameis.exe" to see the static list of API function names the utility is linked against; this doesn't show the sequence in which they're called, nor the parameter values.
You can then use a system debugger (e.g. WinICE or whatever its more modern equivalent might be) to set breakpoints on these APIs, so that you break into the debugger (and can inspect parameter values) when the utility invokes them.
You might be able to glean some information using tools such as Spy++ to look at Windows messages. Debugging/tracing tools (WinDbg, etc.) may allow you to see API calls in process. The Sysinternals tools can show you system information in some degree of detail.
Although I would recommend against this for the most part: is it possible to contact the solution provider and get documentation? One reason is fragility; if a vendor is not expecting users to utilize that aspect of the interface, they are more likely to change it without notice.