I am working on an application where I have separated out two different XPC services from the main application. I want one XPC service to communicate with the other XPC service, which will do some processing and return the data back to the first service; that first service will then do its own processing and give the data back to the main application. I tried this, but communicating between the services gives the error "could not communicate with helper application".
My question: is this possible or not? If yes, what is required?
Any help would be appreciated.
Yes, this is possible, but not at all obvious. I asked questions about this exact thing on and off for a year before an obscure hint from an Apple engineer led me to stumble across the answer.
The trick is that you need to transfer the NSXPCListenerEndpoint of one process to another process. That second process can then use that endpoint object to create a direct connection with the first process. The catch is that, while NSXPCListenerEndpoint is NSCoding compliant, it can only be encoded through an existing XPC connection, which makes this problem sound like a catch-22 (you can't transfer the endpoint until you've created a connection, and you can't create a connection until you have the endpoint).
The solution ("trick") is that you need an intermediating process (let's call it "cornerstone") that already has XPC connections to both of the other processes and can exchange endpoints between them.
In my application I ended up creating a daemon process which acts as my cornerstone, but I think you could do it directly in your application. Here's what you need to do:
Create an application with two XPC services, "A" and "B"
In "A" get the listener object for the process: either get the service listener created automatically (listener = NSXPCListener.serviceListener) or create a dedicated, anonymous, listener for the second process (using listener = NSXPCListener.anonymousListener).
Get the endpoint of the listener (listener.endpoint)
The application should ask "A" for its endpoint.
The application can then launch "B" and, using XPC again, pass the endpoint it got from "A" to "B".
"B" can now use the endpoint object it obtained from "A" (via the application) to create a direct connection to "A" using [[NSXPCConnection alloc] initWithListenerEndpoint:aEndpoint].
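The brokering pattern in the steps above can be sketched in a language-agnostic way. This is not XPC code: it uses Python's multiprocessing as a stand-in, where a Listener address plays the role of the NSXPCListenerEndpoint and the parent process plays the "cornerstone" application that relays A's endpoint to B so B can connect to A directly. All names are illustrative.

```python
from multiprocessing import Pipe, Process
from multiprocessing.connection import Client, Listener

def service_a(app_conn):
    # "A" creates its own listener (analogous to an anonymous NSXPCListener)
    with Listener(('127.0.0.1', 0)) as listener:
        app_conn.send(listener.address)      # hand the "endpoint" to the app
        with listener.accept() as direct:    # direct connection from "B"
            direct.send('payload from A')

def service_b(app_conn, result_conn):
    endpoint = app_conn.recv()               # endpoint relayed by the app
    with Client(endpoint) as direct:         # connect straight to "A"
        result_conn.send(direct.recv())

def broker_demo():
    a_app, app_a = Pipe()                    # app <-> A (the existing connection)
    b_app, app_b = Pipe()                    # app <-> B
    result_recv, result_send = Pipe(duplex=False)
    pa = Process(target=service_a, args=(a_app,))
    pb = Process(target=service_b, args=(b_app, result_send))
    pa.start(); pb.start()
    app_b.send(app_a.recv())                 # the app forwards A's endpoint to B
    message = result_recv.recv()             # B received this over the direct link
    pa.join(); pb.join()
    return message

if __name__ == '__main__':
    print(broker_demo())
```

The shape is the same as in the XPC case: B never talks to A through the app once it has the endpoint; the app only bootstraps the direct connection.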
So I have found that two separate processes are unable to communicate with the same XPC service. That is because when you launch an XPC service, the resulting process is unique to the launcher, and as far as I can tell, you can only communicate with an XPC service that your own process launched.
So I believe your second XPC service will be unable to "launch" the first XPC service, and will therefore be unable to communicate with it.
The best you can probably do is have your second XPCService communicate back to your main application process, which then communicates to the first XPCService.
You could do something like:
[[self.firstXPCConnection remoteObjectProxy] getSomeString:^(NSString *myString) {
    [[self.secondXPCConnection remoteObjectProxy] passSomeString:myString];
}];
Disclaimer: I haven't tried this, but it's the best help I can offer with the knowledge I have.
Problem:
Suppose there are two services A and B. Service A makes an API call to service B.
After a while, service A goes down or becomes unreachable due to network errors.
How can other services detect that an outbound call from service A was lost or never happened? I need a separate concurrent app that automatically reacts (runs emergency code) if service A's outbound call is lost.
What cutting-edge solutions exist?
My thoughts, for example:
service A registers a call event in some middleware (event info, "running" status, timestamp, etc).
If this call is not completed after N seconds, some "call timeout" event in the middleware automatically starts the emergency code.
If the call is completed at the proper time service A marks the call status as "completed" in the same middleware and the emergency code will not be run.
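The middleware idea in the bullets above can be sketched in a few lines. This is a language-agnostic illustration in Python (the names CallWatchdog, register, and complete are made up for the example): a call is registered with a timeout, and the emergency code runs only if the call is not marked completed in time.

```python
import threading

class CallWatchdog:
    """Middleware sketch: service A registers an outbound call; if the call
    is not marked completed within the timeout, emergency code runs."""

    def __init__(self):
        self._timers = {}
        self._lock = threading.Lock()

    def register(self, call_id, timeout, emergency):
        # "Call started" event with a deadline.
        timer = threading.Timer(timeout, self._expired, args=(call_id, emergency))
        with self._lock:
            self._timers[call_id] = timer
        timer.start()

    def complete(self, call_id):
        # Service A marks the call as "completed": cancel the pending timeout.
        with self._lock:
            timer = self._timers.pop(call_id, None)
        if timer is not None:
            timer.cancel()

    def _expired(self, call_id, emergency):
        # "Call timeout" event: run the emergency code if still pending.
        with self._lock:
            still_pending = self._timers.pop(call_id, None) is not None
        if still_pending:
            emergency(call_id)
```

On the Java stack, a ScheduledExecutorService plays the role of threading.Timer here; a real middleware would also persist the call records so the watchdog survives a crash of the watchdog process itself.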
P.S. I'm on Java stack.
Thanks!
I recommend looking into patterns such as Retry, Timeout, Circuit Breaker, Fallback and Healthcheck. You can also look into the Bulkhead pattern if concurrent calls and fault isolation are a concern.
There are many resources where these well-known patterns are explained, for instance:
https://www.infoworld.com/article/3310946/how-to-build-resilient-microservices.html
https://blog.codecentric.de/en/2019/06/resilience-design-patterns-retry-fallback-timeout-circuit-breaker/
I don't know which technology stack you are on, but usually there is already functionality provided for these concerns that you can incorporate into your solution. There are libraries that take care of this resilience functionality, and you can, for instance, set them up so that your custom code is executed when events such as failed retries, timeouts, or activated circuit breakers occur.
E.g. for the Java stack Hystrix is widely used; for .NET you can look into Polly to make use of retry, timeout, circuit breaker, bulkhead or fallback functionality.
Concerning health checks, you can look into Actuator for Java; .NET Core already provides health check middleware that more or less provides that functionality out of the box.
But before using any libraries, I suggest first getting familiar with the purpose and concepts of the listed patterns so you can choose and integrate those that best fit your use cases and major concerns.
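To make the Circuit Breaker pattern concrete, here is a minimal language-agnostic sketch in Python (the libraries above implement much richer versions of this): after a number of consecutive failures the circuit "opens" and calls fail fast, and after a cool-down period one trial call is let through to probe recovery.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures the
    circuit opens and calls fail fast; after reset_after seconds one trial
    call is allowed through (half-open) to probe recovery."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self._failures = 0
        self._opened_at = None

    def call(self, fn, *args, **kwargs):
        if self._opened_at is not None:
            if time.monotonic() - self._opened_at < self.reset_after:
                raise RuntimeError('circuit open: failing fast')
            self._opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self._failures += 1
            if self._failures >= self.max_failures:
                self._opened_at = time.monotonic()
            raise
        self._failures = 0                  # success closes the circuit again
        return result
```

A Retry or Fallback policy would typically wrap this: retry a few times, and if the breaker is open, return a cached or default response instead of propagating the error.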
Update
We have to differentiate between two well-known problems here:
1.) How can service A robustly handle temporary outages of service B (or the network connection between service A and B which comes down to the same problem)?
To address the related problems, the above-mentioned patterns will help.
2.) How to make sure that the request that should be sent to service B will not get lost if service A itself goes down?
To address this kind of problem there are different options at hand.
2a.) The component that performed the request to service A (which then triggers service B) also applies the resilience patterns mentioned and retries its request until service A successfully answers that it has performed its tasks (which includes the successful request to service B).
There can also be several instances of each service and some kind of load balancer in front of these instances which will distribute and direct the requests to an available instance (based on regular performed healthchecks) of the specific service. Or you can use a service registry (see https://microservices.io/patterns/service-registry.html).
You can of course chain several API calls after one another, but this can lead to cascading failures. So I would rather go with an asynchronous communication approach, as described in the next option.
2b.) Let's consider that it is of utmost importance that some instance of service A will reliably perform the request to service B.
You can use message queues in this case as follows:
Let's say you have a queue where jobs to be performed by service A are collected.
Then you have several instances of service A running (see horizontal scaling) where each instance will consume the same queue.
You use the message-locking features of the message queue service, which make sure that as soon as one instance of service A reads a message from the queue, the other instances won't see it. If service A was able to complete its job (i.e. call service B, save some state in service A's persistence, and whatever other tasks need to be included for successful processing), it deletes the message from the queue afterwards, so no other instance of service A will process the same message.
If service A goes down during processing, the queue service automatically unlocks the message, and another instance of service A (or the same instance after it has restarted) will read the message (i.e. the job) from the queue and try to perform all the tasks (call service B, etc.).
You can combine several queues e.g. also to send a message to service B asynchronously instead of directly performing some kind of API call to it.
The key point is that the queue service is a highly available and redundant service that already makes sure no message gets lost once it is published to a queue.
Of course you could also track jobs to be performed in service A's own database, but consider that when service A receives a request, there is always a chance that it goes down before it can save the status of the job to its persistent storage for later processing. Queue services already address that problem for you if chosen thoughtfully and used correctly.
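The message-locking behaviour described above (often called a "visibility timeout" or "peek-lock" by queue services) can be modelled in a few lines. This is a toy in-memory Python sketch, not a real broker; the class and method names are made up for illustration:

```python
import itertools
import time

class LockingQueue:
    """Toy model of a broker queue with message locking: a received message
    is hidden from other consumers until it is deleted (job done) or its
    lock expires (consumer crashed)."""

    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self._messages = {}                 # id -> (body, invisible_until)
        self._ids = itertools.count()

    def send(self, body):
        self._messages[next(self._ids)] = (body, 0.0)

    def receive(self):
        now = time.monotonic()
        for mid, (body, invisible_until) in self._messages.items():
            if invisible_until <= now:      # visible: lock it for this consumer
                self._messages[mid] = (body, now + self.visibility_timeout)
                return mid, body
        return None                         # nothing visible right now

    def delete(self, mid):
        # Consumer finished its job (called service B, saved state, ...):
        # remove the message so no other instance processes it again.
        self._messages.pop(mid, None)
```

An instance of service A would loop: receive a job, call service B, then delete the message; if it crashes before the delete, the lock expires and another instance picks the same job up, which is exactly why the processing should be idempotent.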
For instance, if you use Kafka as the messaging service, this Stack Overflow answer relates to solving the problem with that specific technology: https://stackoverflow.com/a/44589842/7730554
There are many ways to solve your problem.
I guess you are talking about two topics: design patterns for microservices and the Circuit Breaker pattern.
https://dzone.com/articles/design-patterns-for-microservices
To solve your problem, I would normally put a message queue between the services and use Service Discovery to detect which services are live. If a service dies or gets overloaded, use Circuit Breaker methods.
What is the best architecture, using Service Fabric, to guarantee that the message I need to send from Service 1 (mostly API) to Service 2 (mostly API) does not get ever lost (black arrow)?
Ideas:
1
1.a. Make service 1 and 2 stateful services. Is it a bad call to have a stateful Web API?
1.b. Use Reliable Collections to send the message from API code to Service 2.
2
2.a. Make Service 1 and 2 stateless services
2.b. Add a third service
2.c. Send the message over a queuing system (i.e.: Service Bus) from service 1
2.d. To be picked up by the third service. Notice: this third service would also need access to the DB that service 2 (the API) has access to. Not an ideal solution for a microservice architecture, right?
3
3.a. Any other ideas?
Keep in mind that the goal is to never lose the message, not even when service 2 is completely down or temporarily removed… so no direct calls.
Thanks
I'd introduce a third (Stateful) service that holds a queue, 'service 3'.
Service 1 would enqueue the message. Service 3 would run an infinite loop, trying to deliver the message to service 2.
You could use the pub/sub package for this. Service 1 is the publisher, Service 2 is the subscriber.
(If you rely on an external queue system like Service Bus, you'll lower the overall availability of the system. Service Bus downtime would lead to messages being undeliverable.)
I think there is never any solution that is 100% guaranteed to never lose a message between two parties. Even if you had a service bus between the two services, there is always a chance (possibly very small, but never zero) that the service bus goes down, or that the communication to the service bus goes down. That said, there are of course models that are less likely to lose a message, but you can't completely get around the fact that you still have to handle errors in the client.
In fact, Service Fabric fault handling is mainly designed around clients retrying communication, rather than having the service or an intermediary do that. There are many reasons for this (I guess), but one is the nature of distributed, replicated, reliable services. If a service primary goes down, a replica picks up the responsibility, but it won't know what the primary was doing right at the moment it died (unless it had replicated over its state, but it might have died even before that). The only one that really knows what it wants to do in this scenario is the client. The client knows what it is doing and can react to different fault scenarios in the service. In Fabric Transport, most known exceptions that could "naturally" occur, such as the service dying or the network cable being cut off by the janitor, are actually retried automatically. This includes re-resolving the address, just in case the service primary was replaced by a secondary.
The same actually goes for a scenario where you introduce a third service or a service bus. What if the network goes down before the message has completely reached the service? In this case only the client knows that something went wrong and what it intended to send. What if it goes down after it reached the service but before the response was sent? In this case the client has to assume the message never reached and try to resend it. This is also why service methods are recommended to be idempotent - the same call can be made a number of times by the same client.
Even if you were to introduce a secondary party, like the service bus, there is still the same risk that the service bus goes down, or, more likely, that the network connecting to the service bus goes down. So the client needs to retry, and when it has retried a number of times, all it can do is put the message in a queue of failed messages, simply log it, or throw an exception back to the original caller (in your scenario, the browser).
Ok, that was me being pessimistic. But all of the things above could happen; it's just that some are not very likely to.
On to your questions:
1) The problem with making a stateless service stateful is that you now have to handle partitions in your caller. You can put up HTTP listeners for stateful services, but you have to include the partition and replica information in the URI, and that won't work with the load balancer, so in this case the browser would have to select a partition when calling the API. Not an ideal solution.
2) Yes, you could do this, i.e. introduce something else in between that queues messages for you. There is nothing that says a Service Bus or a database is more reliable than a stateful service with a reliable queue; it's just up to you to go with what you are most comfortable with. I would go for a stateful service, just so I can easily keep everything within my SF application. But again, this is not 100% protection from a disgruntled janitor with scissors; for that, you still need clients that can handle faults.
3) Make sure you have a way of handling the errors (retry) and of logging or storing the messages that fail (after retries) with the client (Service 1).
3.a) One way would be to store them locally on the node the client is running on and periodically (in RunAsync, for instance) try to re-run those failed messages. This might be dangerous in the scenario where that node is completely nuked and loses its data, though, since that data won't be replicated.
3.b) Another would be to use semantic logging with ETW and include enough data in the events to be able to re-create the message from the logged information, and build some feature, a manual UI perhaps, where you can re-run it. Much like you would retry a failed message on an error queue in a service bus.
3.c) Store the failed messages in anything else (database, service bus, queue) that doesn't fail for the same reasons as your communication with Service 2.
My main point here (and I could maybe have started with that) is that there are plenty of scenarios where only the client knows enough to handle the situation. So make sure you have a strategy for handling faults in your clients.
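The client-side strategy from point 3 (retry a bounded number of times, then store the failed message for later replay) can be sketched in a few lines. This is a language-agnostic Python illustration; the function name and parameters are made up for the example:

```python
import time

def send_with_retry(send, message, attempts=3, delay=0.1, dead_letter=None):
    """Client-side fault handling sketch: retry a send a bounded number of
    times; if all attempts fail, hand the message to a dead-letter sink
    (local store, log, or another queue) and re-raise for the caller."""
    last_error = None
    for attempt in range(attempts):
        try:
            return send(message)            # success: done
        except Exception as error:
            last_error = error
            if attempt < attempts - 1:
                time.sleep(delay)           # simple fixed backoff between tries
    if dead_letter is not None:
        dead_letter(message)                # keep the message for later replay
    raise last_error
```

This pairs with the idempotency advice above: because the same message may be sent more than once, Service 2's handler should tolerate duplicates.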
I have created a Web API which allows messages to be sent to the queue. My Web API is designed with CQRS and DDD in mind. I want my message consumer to always be waiting for any messages on the queue. Currently, the way it's done, messages will only be read if I make a request to the API that hits the method.
Is there a way, using a console application or something similar, to have a process always running that consumes messages at any given time, without having to make a request to the Web API? More of an automated task?
If so, how do I go about it? I.e., if it's a console app, how would I keep it always running (IIS?), and is there a way to use dependency injection, as I need to consume the message and then send it to my repository, which lives in a separate solution?
Or is there a way to make EasyNetQ run at startup?
The best way to handle this situation in your case is to subscribe to bus events over AMQP through the EasyNetQ library. The recommended way of hosting it is to write a Windows service using the Topshelf library and subscribe to bus events inside that service on start.
IIS processes and threads are not reliable for such tasks, as they are designed to be recycled on a regular basis, which may cause instabilities and inconsistencies in your application.
and is there way to use Dependency Injection as I need to consume the message then send to my repository which lives on separate solution.
It is better to create a separate question for this, as it is off-topic here. It also requires further elaboration, as it is not clear what specifically you are struggling with.
I have the following structure in my BPEL process.
-> Start process
Invoke web service ->
Do something
Invoke another web service ->
<- Send answer
This synchronous BPEL 2.0 process is implemented in a service I created with Oracle SOA Suite 11g. I want to alter the process to the following:
-> Start process
Invoke web service ->
<- Send answer
Do something
Invoke another web service ->
My problem is that the instance that calls my web service, and therefore triggers this BPEL process, only needs to know the result of the web service invoked first. The "Do something" part and the invocation of the other web service can take several seconds and from time to time cause a timeout on the consumer side. So the BPEL process has to send the answer after the first invoke but still has to do the other steps. I tried putting the reply of the output right after the first invocation, but the web service still seems to wait for the process to completely finish before sending the answer to the consumer, probably because it's defined as a synchronous web service. But I guess I can't define it as an asynchronous service, because the answer of the first invoke is needed. Or do I have to create a second BPEL process which contains just the last two parts and make that one asynchronous? Keep in mind that in the "Do something" part I also need the answer from the first invocation.
Sorry for any errors, I'm not a native English speaker. And thanks for any help!
I just added another BPEL process. Now I have a synchronous and an asynchronous process, both started at the mediator in the composite. Probably not the perfect solution, but it does the trick for me.
After creating multiple instances of a named pipe (using CreateNamedPipe()), I use CreateFile() to form a pipe client.
When the client writes a message to the pipe, only one server instance gets it.
Is there a way for the client to write a message to all instances?
When a client connects to an instance of a named pipe, the manner in which the operating system chooses which server instance to make the connection to is undocumented, as far as I know. However, empirically it appears to be done on a round robin basis.
If you are prepared to rely on undocumented behaviour which may change with service packs and QFE patches, your client can keep closing its pipe handle and calling CreateFile again to get a new one; each time it will attach to a new server instance of the pipe. However, there is a problem with this: the client would not know when to stop. I suppose you could invent some mechanism involving a response from the server to break the loop, but it is far from satisfactory. This isn't what named pipes were designed for.
The real purpose of multiple server instances of a pipe is to enable pipe servers to handle multiple clients concurrently. Usually, the same server process manages all the instances.
You really want to turn things around: what you think of as your client should be the server, and should create and manage the pipe. Processes which want notification would then connect as clients of the named pipe. This is a pattern which can be implemented quite easily using WCF, with a duplex contract and the NetNamedPipeBinding, if that's an option.
No, a pipe has two ends. Loop through the pipes. A mailslot supports broadcasts, but delivery isn't guaranteed.