C# .NET Windows service with HttpSelfHostServer: API access to a class object

I currently have a .NET Framework Windows service that uses a producer/consumer pattern and runs indefinitely. The producer runs on a timer and, if there are any items to work on, adds them to a ConcurrentQueue; several consumer threads then try to dequeue items and perform a task on each. While an item is being worked on, it is added to a ConcurrentDictionary.
What I am looking to do is create an API endpoint that another service can call to see what is in the queue and what is currently being worked on.
My thought was to use an HttpSelfHostServer with two endpoints, such as /GetCurrentQueue and /GetProcessing, each returning a JSON array of the items in the corresponding collection.
My question is: how can I give the API controller access to the ConcurrentQueue and ConcurrentDictionary, which are part of another class in my service?
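One common option is to expose the collections through a shared holder that both the worker class and the controller reference. A minimal sketch, assuming ASP.NET Web API self-hosting (HttpSelfHostServer from the Microsoft.AspNet.WebApi.SelfHost package); WorkItem, SharedState, and StatusController are hypothetical names:

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Web.Http;
using System.Web.Http.SelfHost;

public class WorkItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Both the worker class and the controller reference these
// same collection instances.
public static class SharedState
{
    public static ConcurrentQueue<WorkItem> Queue { get; } =
        new ConcurrentQueue<WorkItem>();

    public static ConcurrentDictionary<int, WorkItem> Processing { get; } =
        new ConcurrentDictionary<int, WorkItem>();
}

public class StatusController : ApiController
{
    [HttpGet]
    public IHttpActionResult GetCurrentQueue()
    {
        // ToArray() takes a moment-in-time snapshot of the queue.
        return Ok(SharedState.Queue.ToArray());
    }

    [HttpGet]
    public IHttpActionResult GetProcessing()
    {
        return Ok(SharedState.Processing.Values.ToArray());
    }
}

// In the service's OnStart:
//   var config = new HttpSelfHostConfiguration("http://localhost:8080");
//   config.Routes.MapHttpRoute("Api", "{action}", new { controller = "Status" });
//   var server = new HttpSelfHostServer(config);
//   server.OpenAsync().Wait();
// The endpoints are then GET http://localhost:8080/GetCurrentQueue
// and GET http://localhost:8080/GetProcessing.
```

If you'd rather avoid static state, you could register the worker class with a custom IDependencyResolver on the HttpSelfHostConfiguration and inject it into the controller's constructor instead.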

Related

Azure web app - how to avoid multiple users triggering the same endpoint at the same time

Is there any way to set a limit on the number of concurrent requests for an Azure web app (ASP.NET Web API)?
I have an endpoint that runs for a long time, and I'd like to avoid multiple triggers while the application is processing one request.
Thank you
This would have to be a custom implementation, which can be done in a few ways:
1. Leverage a Queue
This involves separating the background process into a separate execution flow by using a queue. So, your code would be split into two parts:
API Endpoint that receives the request and inserts a message into a queue
Separate Method (or Service) that listens on the queue and processes messages one by one
The second method could either be in the same Web App or could be separated into a Function App. The queue could be in Azure Service Bus, which your Web App or Function would be listening on.
This approach has the added benefit of durability: if the web app or function crashes before completing a request, the message stays on the queue and will be processed again, in order.
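A minimal sketch of the enqueue side, assuming Azure Service Bus via the Azure.Messaging.ServiceBus package and an ASP.NET Core controller (queue name, route, and payload shape are placeholders):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/jobs")]
public class JobsController : ControllerBase
{
    private readonly ServiceBusSender _sender;

    public JobsController(ServiceBusClient client)
    {
        _sender = client.CreateSender("jobs");
    }

    [HttpPost]
    public async Task<IActionResult> Enqueue([FromBody] string payload)
    {
        // The endpoint only enqueues; the long-running work happens in a
        // separate listener (background service or Service Bus-triggered
        // Function) that processes one message at a time.
        await _sender.SendMessageAsync(new ServiceBusMessage(payload));
        return Accepted();
    }
}
```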
2. Distributed Lock
This approach is simpler but lacks durability. Here you would simply use an in-memory queue to process requests, ensuring only one is processed at a time by having the method acquire a lock that subsequent requests wait on before being processed.
You could leverage blob storage leases as an option for distributed locks.
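A minimal sketch of a blob-lease lock using the Azure.Storage.Blobs package (the lock blob is a placeholder and must already exist):

```csharp
using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class BlobLock
{
    public static string TryAcquire(BlobClient lockBlob)
    {
        try
        {
            // A lease can be held for 15-60 seconds, or infinitely (-1).
            BlobLeaseClient lease = lockBlob.GetBlobLeaseClient();
            return lease.Acquire(TimeSpan.FromSeconds(60)).Value.LeaseId;
        }
        catch (RequestFailedException ex) when (ex.Status == 409)
        {
            // 409 Conflict: another instance currently holds the lease.
            return null;
        }
    }

    public static void Release(BlobClient lockBlob, string leaseId)
    {
        lockBlob.GetBlobLeaseClient(leaseId).Release();
    }
}
```

Each request handler would call TryAcquire before starting the long-running work and Release when done (or simply let the lease expire).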

Microservices: how to track services that have gone down?

Problem:
Suppose there are two services, A and B. Service A makes an API call to service B.
After a while, service A goes down or becomes unreachable due to network errors.
How will other services detect that an outbound call from service A was lost or never happened? I need some concurrent app that will automatically react (run emergency code) if an outbound call from service A is lost.
What cutting-edge solutions exist?
My thoughts, for example:
Service A registers a call event in some middleware (event info, "running" status, timestamp, etc.).
If the call is not completed after N seconds, a "call timeout" event in the middleware automatically starts the emergency code.
If the call is completed in time, service A marks the call status as "completed" in the same middleware and the emergency code will not be run. (A sketch of this idea follows below.)
P.S. I'm on Java stack.
Thanks!
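A minimal sketch of the watchdog idea described above, in C# to match the rest of this page (the pattern ports directly to Java). CallRegistry and the emergency handler are hypothetical names, and a real version would keep state in shared middleware rather than in memory:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class CallRegistry : IDisposable
{
    private readonly ConcurrentDictionary<Guid, DateTime> _running =
        new ConcurrentDictionary<Guid, DateTime>();
    private readonly TimeSpan _timeout;
    private readonly Action<Guid> _emergencyHandler;
    private readonly Timer _sweeper;

    public CallRegistry(TimeSpan timeout, Action<Guid> emergencyHandler)
    {
        _timeout = timeout;
        _emergencyHandler = emergencyHandler;
        // Periodically sweep for calls that exceeded the timeout.
        _sweeper = new Timer(Sweep, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }

    // Service A calls this just before making the outbound call.
    public Guid Register()
    {
        var id = Guid.NewGuid();
        _running[id] = DateTime.UtcNow;
        return id;
    }

    // Service A calls this when the outbound call completed in time.
    public void Complete(Guid id) => _running.TryRemove(id, out _);

    private void Sweep(object state)
    {
        foreach (var entry in _running)
        {
            if (DateTime.UtcNow - entry.Value > _timeout &&
                _running.TryRemove(entry.Key, out _))
            {
                _emergencyHandler(entry.Key); // the "call timeout" event
            }
        }
    }

    public void Dispose() => _sweeper.Dispose();
}
```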
I recommend looking into patterns such as Retry, Timeout, Circuit Breaker, Fallback and Healthcheck. You can also look into the Bulkhead pattern if concurrent calls and fault isolation are your concern.
There are many resources where these well-known patterns are explained, for instance:
https://www.infoworld.com/article/3310946/how-to-build-resilient-microservices.html
https://blog.codecentric.de/en/2019/06/resilience-design-patterns-retry-fallback-timeout-circuit-breaker/
I don't know which technology stack you are on, but usually there is already functionality provided for these concerns that you can incorporate into your solution. There are libraries that take care of this resilience functionality, and you can, for instance, set them up so that your custom code is executed when events such as failed retries, timeouts, or activated circuit breakers occur.
E.g. for the Java stack Hystrix is widely used; for .NET you can look into Polly to make use of retry, timeout, circuit breaker, bulkhead or fallback functionality.
Concerning health checks, you can look into Actuator for Java, while .NET Core already provides health check middleware that gives you that functionality more or less out of the box.
But before using any libraries, I suggest first getting familiar with the purpose and concepts of the listed patterns, so you can choose and integrate those that best fit your use cases and major concerns.
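For illustration, a minimal Polly sketch that wraps an HTTP call to service B in a retry policy combined with a circuit breaker (the endpoint URL is a placeholder):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

public static class ResilientClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Break the circuit after 3 consecutive failures; stay open for 30 s.
    private static readonly AsyncCircuitBreakerPolicy Breaker =
        Policy.Handle<HttpRequestException>()
              .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

    // Retry up to 3 times with exponential backoff.
    private static readonly AsyncPolicy Retry =
        Policy.Handle<HttpRequestException>()
              .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<string> CallServiceBAsync() =>
        Retry.WrapAsync(Breaker).ExecuteAsync(
            () => Http.GetStringAsync("https://service-b.example.com/api/work"));
}
```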
Update
We have to differentiate between two well-known problems here:
1.) How can service A robustly handle temporary outages of service B (or of the network connection between service A and B, which comes down to the same problem)?
The patterns mentioned above will help address these problems.
2.) How to make sure that the request that should be sent to service B will not get lost if service A itself goes down?
To address this kind of problem there are different options at hand.
2a.) The component that performed the request to service A (which then triggers service B) also applies the resilience patterns mentioned above, and retries its request until service A successfully answers that it has performed its tasks (which includes the successful request to service B).
There can also be several instances of each service, with some kind of load balancer in front of them which distributes and directs requests to an available instance of the specific service (based on regularly performed health checks). Or you can use a service registry (see https://microservices.io/patterns/service-registry.html).
You can of course chain several API calls one after another, but this can lead to cascading failures. So I would rather go with an asynchronous communication approach as described in the next option.
2b.) Let's consider that it is of utmost importance that some instance of service A will reliably perform the request to service B.
You can use message queues in this case as follows:
Let's say you have a queue where jobs to be performed by service A are collected.
Then you have several instances of service A running (see horizontal scaling) where each instance will consume the same queue.
You will use the message-locking features of the message queue service, which make sure that as soon as one instance of service A reads a message from the queue, the other instances won't see it. If service A was able to complete its job (i.e. call service B, save some state in service A's persistence, and whatever other tasks are needed for successful processing), it deletes the message from the queue afterwards, so no other instance of service A will process the same message.
If service A goes down during processing, the queue service automatically unlocks the message, and another instance of service A (or the same instance after it has restarted) will read the message (i.e. the job) from the queue and try to perform all the tasks again (call service B, etc.).
You can combine several queues, e.g. to also send a message to service B asynchronously instead of directly performing an API call to it.
The key point is that the queue service is a highly available and redundant service that already makes sure no message gets lost once it is published to a queue.
Of course you could also track jobs to be performed in service A's own database, but consider that when service A receives a request there is always a chance it goes down before it can save the job's status to its persistent storage for later processing. Queue services already address that problem for you if chosen thoughtfully and used correctly. (A sketch of this consumption pattern follows below.)
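A minimal sketch of this peek-lock consumption, assuming Azure Service Bus as the queue service (the queue name and the call to service B are placeholders):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ServiceAWorker
{
    public static async Task RunAsync(ServiceBusClient client)
    {
        var processor = client.CreateProcessor("service-a-jobs",
            new ServiceBusProcessorOptions
            {
                // PeekLock: the message stays locked (invisible to other
                // instances) while this instance works on it.
                ReceiveMode = ServiceBusReceiveMode.PeekLock,
                AutoCompleteMessages = false
            });

        processor.ProcessMessageAsync += async args =>
        {
            await CallServiceBAsync(args.Message.Body.ToString()); // hypothetical
            // Only delete the message once everything succeeded. If this
            // instance crashes first, the lock expires and another
            // instance picks the job up again.
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args => Task.CompletedTask; // log in real code

        await processor.StartProcessingAsync();
    }

    private static Task CallServiceBAsync(string payload) => Task.CompletedTask;
}
```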
For instance, if you look into Kafka as the messaging service, this Stack Overflow answer relates to solving this problem with that specific technology: https://stackoverflow.com/a/44589842/7730554
There are many ways to solve your problem.
I guess you are talking about two topics: design patterns in microservices, and the Circuit Breaker.
https://dzone.com/articles/design-patterns-for-microservices
To solve your problem, I would normally put a message queue between services and use service discovery to detect which services are live. If your service dies or gets overloaded, then use circuit breaker methods.

Multiple workflow instances with Windows Workflow Foundation

I'm new to WF. What I'm trying to do is create a simple workflow service and call it from various clients. So what I have done: I created a workflow service. It has a .xamlx file containing a sequence with Receive and SendReply activities. I also have correlations. The first Receive/SendReply pair has CanCreateInstance set to True. In addition to this, I wrote some of my own code activities.
Now I have hosted this service in IIS and am trying to call it from a console app. I added the web reference, created a service client, and passed values to the service. It gives me the expected results.
But when I try to run another client at the same time, it gives me an instance error. I think the workflow is not initiating a new instance for the second client.
So I searched and found that multiple instancing can be achieved by using WorkflowServiceHost, but I could not find a way to do it.
I think the way I'm calling the service is not correct. I'm just creating a new object from the service reference and calling the operation.
Can anyone help me with this?
Please have a look at the correlation rules you've set up for your workflow. If several clients pass parameters that correlate with the same instance, a new instance won't be created.
So, if you need a new instance, you need to set up different correlation rules, so that different clients' calls correlate with different workflow instances.

In DDD, who should be responsible for handling domain events?

Who should be responsible for handling domain events? Application services, domain services, or the entities themselves?
Let's use a simple example for this question.
Let's say we work on a shop application, and we have an application service dedicated to order operations. In this application, Order is an aggregate root, and following the rules, we can work with only one aggregate within a single transaction. After an Order is placed, it is persisted in the database. But there is more to be done. First of all, we need to change the number of items available in the inventory, and secondly, notify some other part of the system (probably another bounded context) that the shipping procedure for that particular order should be started. Because, as already stated, we can modify only one aggregate within a transaction, I am thinking about publishing an OrderPlacedEvent that will be handled by some components in separate transactions.
The question arises: which components should handle this type of event?
I'd handle them as follows:
1) Application layer if the event triggers modification of another Aggregate in the same bounded context.
2) Application layer if the event triggers some infrastructure service.
e.g. An email is sent to the customer. An application service is needed to load the order for the mail content and recipient, and then invoke an infrastructure service to send the mail.
3) I personally prefer a Domain Service if the event triggers some operations in another bounded context.
e.g. Shipping or Billing; an infrastructure implementation of the Domain Service is responsible for integrating the other bounded context.
4) Infrastructure layer if the event needs to be fanned out to multiple consumers. Each consumer then falls under 1), 2) or 3).
For me, the conclusion is: Application layer if the event leads to a separate acceptance test for your bounded context.
By the way, what's your infrastructure for ensuring durability of your events? Do you include the event publishing in the transaction?
These kinds of handlers belong to the application layer. You should probably create a supporting application service method too; this way you can start a separate transaction.
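A minimal sketch of such an application-layer handler for the OrderPlacedEvent from the question; all types (OrderPlacedHandler, IInventoryRepository, IUnitOfWork) are hypothetical:

```csharp
using System.Threading.Tasks;

public sealed class OrderPlacedEvent
{
    public int OrderId { get; set; }
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// Application-layer event handler: loads the second aggregate
// (Inventory) and modifies it in its own, separate transaction.
public sealed class OrderPlacedHandler
{
    private readonly IInventoryRepository _inventory;
    private readonly IUnitOfWork _unitOfWork;

    public OrderPlacedHandler(IInventoryRepository inventory, IUnitOfWork unitOfWork)
    {
        _inventory = inventory;
        _unitOfWork = unitOfWork;
    }

    public async Task HandleAsync(OrderPlacedEvent evt)
    {
        var item = await _inventory.GetByProductIdAsync(evt.ProductId);
        item.Reserve(evt.Quantity);       // domain logic stays in the aggregate
        await _unitOfWork.CommitAsync();  // one aggregate, one transaction
    }
}

public interface IInventoryRepository
{
    Task<InventoryItem> GetByProductIdAsync(int productId);
}

public interface IUnitOfWork
{
    Task CommitAsync();
}

public sealed class InventoryItem
{
    public void Reserve(int quantity) { /* invariant checks here */ }
}
```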
I think the most common and usual place to put event handlers is in the application layer. Drawing the analogy with CQRS, event handlers are very similar to command handlers, and I usually put them both close to each other (in the application layer).
This article from Microsoft also gives some examples of putting handlers there.

How do I access the WF 4 Receive activity from WcfTestClient

My workflow needs to wait for either an email approval via a Bookmark or a WCF approval via a Receive, so I used a Parallel activity. The email approval works just fine, but I am trying to test the WCF part and can't figure out what URL to use in WCF Test Client to access the workflow.
I would be grateful for any leads, because I am very new to WCF and am not sure how to go about solving this problem.
Since you are using a workflow service, your second Receive activity must correlate with the first one, have its CanCreateInstance checkbox set to false, and use the same service contract name as the first one.
When you generate a proxy for the workflow service, the operation method becomes available to call from the client. When hosted in IIS, the workflow service is typically addressed by the URL of the .xamlx file itself (with ?wsdl appended for the metadata), so that is the address to give WCF Test Client.
You can refer to this article:
http://www.codeproject.com/Articles/50820/Establishing-Correlation-Between-Multiple-RECEIVE
