I have the following structure in my BPEL process.
-> Start process
Invoke web service ->
Do something
Invoke another web service ->
<- Send answer
This synchronous BPEL 2.0 process is implemented in a service I created with Oracle SOA Suite 11g. I want to alter the process to the following:
-> Start process
Invoke web service ->
<- Send answer
Do something
Invoke another web service ->
My problem is that the consumer that calls my web service, and therefore triggers this BPEL process, only needs the result of the first web service invocation. The "Do something" part and the invocation of the second web service can take several seconds and occasionally cause a timeout on the consumer side. So the BPEL process has to send the answer after the first invoke but still has to do the remaining work afterwards. I tried placing the reply of the output right after the first invocation, but the web service still seems to wait for the process to finish completely before sending the answer to the consumer, probably because it is defined as a synchronous web service. But I guess I can't define it as an asynchronous service either, because the answer of the first invoke is needed. Or do I have to create a second, asynchronous BPEL process that contains just the last two parts? Keep in mind that the "Do something" part also needs the answer from the first invocation.
Sorry for any errors, I'm not a native English speaker. And thanks for any help!
I just added another BPEL process. Now I have a synchronous and an asynchronous process, both started from the mediator in the composite. Probably not the perfect solution, but it does the trick for me.
Consider an architecture like this:
API Gateway - responsible for aggregating services
Users microservice - CRUD operations on the user (users, addresses, consents, etc)
Notification microservice - sending email and SMS notifications
Security microservice - a service responsible for granting / revoking permissions to users and clients. For example, by connecting to Keycloak, it creates a user account with basic permission
Client - any application that connects to API Gateway in order to perform a given operation, e.g. user registration
Now, we would like to use Camunda for the entire process.
For example:
Client-> ApiGateway-> UsersMicroservice.Register-> SecurityMicroservice.AddDefaultPermition-> NotificationMicroservice.SendEmail
We would like to make this simplified flow with the use of e.g. Camunda.
Should the process start in UsersMicroservice.RegisterUser after receiving "POST api/users/"? That is, UsersMicroservice.RegisterUser starts the process in Camunda; and how does this endpoint know which specific process to run in Camunda?
What if the BPMN process in Camunda is designed so that immediately after entering the process there is a Business Rule Task that validates the input and, if there is no "Name" for example, interrupts the registration process? How will UsersMicroservice find out that the process has been interrupted and that it should not perform any further standard operations like return this.usersService.Create (userInput);?
Should the call to Camunda be in the Controller or rather in the Service layer?
How, in the architecture above, can we change the default Client -> UsersMicroservice -> UsersService -> Database flow to use Camunda, adding e.g. input validation before calling return this.usersService.Create (someInput);?
If your intention is to let the process engine orchestrate the business process, then why not start the business process first? Either expose the start-process API or a facade, which gets called by the API gateway when the desired business request should be served. Now let the process model decide which steps need to be taken to serve the request and deliver the desired result/business value. The process may start with a service task to create a user. However, as you wrote, the process may evolve and perform additional checks before the user is created. Maybe a DMN decision validates the data. Maybe it is followed by a gateway which leads to a rejection path, a path that calls an additional blacklist service, a path with a manual review, and the "happy path" with automated creation of the user. Whatever needs to happen, this is business logic, which you can keep flexible by giving control to the process engine first.
The process should be started by the controller via a start-process endpoint, before/not from UsersMicroservice.RegisterUser. You use a fixed process definition key to start it. From there everything can be changed in the process model. You could potentially have an initial routing process ("serviceRequest") first, which determines from a process variable ("request type") what kind of request it is ("createUser", "disableUser", ...) and dispatches to the correct specific process for the given request ("createUser" -> "userCreationProcess").
The UsersMicroservice should be stateless (request state is managed in the process engine) and should not need to know. If the process is started first, the request may never reach UsersMicroservice. this.usersService.Create will only be called if the business logic in the process has determined that it is required - same for any subsequent service calls. If a subsequent step fails, error handling can include retries, handling of a business error (e.g. "email address already exists") via an exceptional error path in the model (BPMNError), or eventually triggering a 'rollback' of operations already performed (compensation).
Controller - see above. The process will call the service if needed.
Call the process first, then let it decide what needs to happen.
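As a rough sketch of this "process first" idea: the controller starts a process by a fixed definition key, and the user-creation service task only runs if the model's validation step lets it. The classes below are simplified, made-up stand-ins; in real Camunda the start call would be runtimeService.startProcessInstanceByKey("userCreationProcess", variables) and the validation would live in a DMN/Business Rule Task:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Stand-in for the engine; in Camunda this is
// runtimeService.startProcessInstanceByKey(key, variables).
class ProcessEngineStub {
    private final Predicate<Map<String, Object>> validation;
    private final UsersService usersService;

    ProcessEngineStub(Predicate<Map<String, Object>> validation, UsersService usersService) {
        this.validation = validation;
        this.usersService = usersService;
    }

    // The process model decides whether the service task runs at all.
    String startProcessByKey(String key, Map<String, Object> variables) {
        if (!validation.test(variables)) {
            return "REJECTED";           // rejection path in the model
        }
        usersService.create(variables);  // service task calls the microservice
        return "COMPLETED";
    }
}

class UsersService {
    boolean created = false;
    void create(Map<String, Object> input) { created = true; }
}

public class UsersController {
    public static void main(String[] args) {
        // Validation stands in for a Business Rule Task checking "name".
        UsersService service = new UsersService();
        ProcessEngineStub engine =
            new ProcessEngineStub(v -> v.get("name") != null, service);

        // The controller starts the process first, with a fixed key.
        Map<String, Object> vars = new HashMap<>();
        vars.put("name", "Alice");
        System.out.println(engine.startProcessByKey("userCreationProcess", vars));
        System.out.println(service.created);

        // Missing "name": the process rejects, the service is never called.
        UsersService service2 = new UsersService();
        ProcessEngineStub engine2 =
            new ProcessEngineStub(v -> v.get("name") != null, service2);
        System.out.println(engine2.startProcessByKey("userCreationProcess", new HashMap<>()));
        System.out.println(service2.created);
    }
}
```

Note how UsersMicroservice never needs to know about the interruption: its create method is simply never invoked on the rejection path.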
Problem:
Suppose there are two services A and B. Service A makes an API call to service B.
After a while, service A goes down or becomes unreachable due to network errors.
How can other services tell that an outbound call from service A was lost or never happened? I need some other concurrent application that will automatically react (run emergency code) if service A's outbound call is lost.
What cutting-edge solutions exist for this?
My thoughts, for example:
Service A registers a call event in some middleware (event info, "running" status, timestamp, etc.).
If this call has not completed after N seconds, a "call timeout" event in the middleware automatically starts the emergency code.
If the call completes in time, service A marks the call status as "completed" in the same middleware and the emergency code will not run.
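That scheme can be sketched as a watchdog with a scheduled timeout check. This is a minimal in-memory illustration with made-up names; in the real scenario the status registry and the timer would have to live in the separate middleware process, not inside service A, so that they survive A's crash:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CallWatchdog {
    private final Map<String, String> callStatus = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    // Service A registers the outbound call before performing it.
    void registerCall(String callId, long timeoutMillis, Runnable emergencyCode) {
        callStatus.put(callId, "running");
        scheduler.schedule(() -> {
            // If the call was never marked completed, run the emergency code.
            if ("running".equals(callStatus.get(callId))) {
                emergencyCode.run();
            }
        }, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Service A marks the call completed when the response arrives in time.
    void complete(String callId) {
        callStatus.put(callId, "completed");
    }

    public static void main(String[] args) throws InterruptedException {
        CallWatchdog watchdog = new CallWatchdog();

        watchdog.registerCall("call-1", 100, () -> System.out.println("emergency for call-1"));
        watchdog.complete("call-1"); // completed in time: no emergency code runs

        watchdog.registerCall("call-2", 100, () -> System.out.println("emergency for call-2"));
        // call-2 is never completed, so its timeout fires.

        Thread.sleep(300);
        watchdog.scheduler.shutdown();
        System.out.println("done");
    }
}
```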
P.S. I'm on Java stack.
Thanks!
I recommend looking into patterns such as Retry, Timeout, Circuit Breaker, Fallback and Healthcheck. You can also look into the Bulkhead pattern if concurrent calls and fault isolation are your concern.
There are many resources where these well-known patterns are explained, for instance:
https://www.infoworld.com/article/3310946/how-to-build-resilient-microservices.html
https://blog.codecentric.de/en/2019/06/resilience-design-patterns-retry-fallback-timeout-circuit-breaker/
I don't know which technology stack you are on, but usually there is already some functionality provided for these concerns that you can incorporate into your solution. There are libraries that take care of this resilience functionality, and you can, for instance, set things up so that your custom code is executed when events such as failed retries, timeouts, or activated circuit breakers occur.
E.g. for the Java stack Hystrix is widely used; for .NET you can look into Polly to make use of retry, timeout, circuit breaker, bulkhead or fallback functionality.
Concerning health checks, you can look into Actuator for Java, and .NET Core already provides health check middleware that more or less offers that functionality out of the box.
But before using any libraries, I suggest first getting familiar with the purpose and concepts of the listed patterns, so you can choose and integrate those that best fit your use cases and major concerns.
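To make the concepts concrete, here is a bare-bones retry-with-fallback in plain Java, with no library and made-up names. Real libraries like the ones mentioned above add timeouts, circuit breaking and proper backoff on top of this basic shape:

```java
import java.util.function.Supplier;

public class RetryWithFallback {
    // Try the call up to maxAttempts times; fall back if all attempts fail.
    static <T> T callWithRetry(Supplier<T> call, int maxAttempts, Supplier<T> fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                System.out.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        return fallback.get(); // e.g. cached data or a default answer
    }

    public static void main(String[] args) {
        // Simulated flaky remote call: fails twice, then succeeds.
        int[] calls = {0};
        Supplier<String> flaky = () -> {
            if (++calls[0] < 3) throw new RuntimeException("service B unavailable");
            return "real answer";
        };

        System.out.println(callWithRetry(flaky, 5, () -> "fallback answer"));

        // A call that never succeeds ends up on the fallback.
        System.out.println(callWithRetry(() -> { throw new RuntimeException("down"); },
                                         2, () -> "fallback answer"));
    }
}
```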
Update
We have to differentiate between two well-known problems here:
1.) How can service A robustly handle temporary outages of service B (or the network connection between service A and B which comes down to the same problem)?
To address the related problems the above mentioned patterns will help.
2.) How to make sure that the request that should be sent to service B will not get lost if service A itself goes down?
To address this kind of problem there are different options at hand.
2a.) The component that performed the request to service A (which then triggers service B) also applies the resilience patterns mentioned above and retries its request until service A successfully answers that it has performed its tasks (which also includes the successful request to service B).
There can also be several instances of each service and some kind of load balancer in front of these instances, which will distribute and direct the requests to an available instance (based on regularly performed health checks) of the specific service. Or you can use a service registry (see https://microservices.io/patterns/service-registry.html).
You can of course chain several API calls after one another, but this can lead to cascading failures. So I would rather go with an asynchronous communication approach as described in the next option.
2b.) Let's consider that it is of utmost importance that some instance of service A will reliably perform the request to service B.
You can use message queues in this case as follows:
Let's say you have a queue where jobs to be performed by service A are collected.
Then you have several instances of service A running (see horizontal scaling) where each instance will consume the same queue.
You will use the message-locking features of the message queue service, which make sure that as soon as one instance of service A reads a message from the queue, the other instances won't see it. If service A was able to complete its job (i.e. call service B, save some state in service A's persistence, and whatever other tasks need to be included for successful processing), it will delete the message from the queue afterwards, so no other instance of service A processes the same message.
If service A goes down during processing, the queue service will automatically unlock the message for you, and another instance of service A (or the same instance after it has restarted) will read the message (i.e. the job) from the queue and try to perform all the tasks (call service B, etc.).
You can combine several queues e.g. also to send a message to service B asynchronously instead of directly performing some kind of API call to it.
The catch is that the queue service must be a highly available and redundant service which already makes sure that no message gets lost once it has been published to a queue.
Of course you could also track jobs to be performed in service A's own database, but consider that when service A receives a request, there is always a chance that it goes down before it can save the status of the job to its persistent storage for later processing. Queue services already address that problem for you if chosen thoughtfully and used correctly.
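The locking behaviour described above is often called a visibility timeout (e.g. in Amazon SQS). A tiny in-memory model can illustrate the semantics; a real queue service does this across processes and machines, and the class and method names here are made up for the sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class VisibilityQueue {
    private final Deque<String> visible = new ArrayDeque<>();
    private final Map<String, Long> inFlight = new HashMap<>(); // msg -> lock deadline
    private final long lockMillis;

    VisibilityQueue(long lockMillis) { this.lockMillis = lockMillis; }

    void publish(String msg) { visible.addLast(msg); }

    // A consumer reads a message; it becomes invisible to other consumers.
    String receive(long now) {
        requeueExpired(now);
        String msg = visible.pollFirst();
        if (msg != null) inFlight.put(msg, now + lockMillis);
        return msg;
    }

    // Successful processing deletes the message for good.
    void delete(String msg) { inFlight.remove(msg); }

    // If the consumer died without deleting, the lock expires and the
    // message becomes visible again for another instance.
    private void requeueExpired(long now) {
        inFlight.entrySet().removeIf(e -> {
            if (e.getValue() <= now) { visible.addLast(e.getKey()); return true; }
            return false;
        });
    }

    public static void main(String[] args) {
        VisibilityQueue queue = new VisibilityQueue(30_000);
        queue.publish("job-1");

        String msg = queue.receive(0);            // instance 1 takes the job
        System.out.println("instance 1 got " + msg);
        System.out.println(queue.receive(1_000)); // other instances see nothing: null

        // Instance 1 crashes and never deletes; after the lock expires,
        // another instance receives the same job again.
        System.out.println(queue.receive(31_000));
    }
}
```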
For instance, if you look into Kafka as the messaging service, this Stack Overflow answer relates to solving the problem with that specific technology: https://stackoverflow.com/a/44589842/7730554
There are many ways to solve your problem.
I guess you are talking about two topics: design patterns in microservices, and the Circuit Breaker pattern.
https://dzone.com/articles/design-patterns-for-microservices
To solve your problem, I would normally put a message queue between the services and use Service Discovery to detect which services are live. If a service dies or is overloaded, then use Circuit Breaker methods.
I am working on an application where I have separated out two different XPC services from the main application. I want one XPC service to communicate with the other XPC service, which will do some processing and return the data to the first service; that first service will then do its own processing and give the data back to the main application. I tried this, but communicating between the services gives the error "could not communicate with helper application".
My question is whether this is possible at all. If yes, what is required?
Any help would be appreciated.
Yes, this is possible, but not at all obvious. I asked questions about this exact thing on and off for a year before an obscure hint from an Apple engineer led me to stumble across the answer.
The trick is that you need to transfer the NSXPCListenerEndpoint of one process to another process. That second process can then use that endpoint object to create a direct connection with the first process. The catch is that, while NSXPCListenerEndpoint is NSCoding compliant, it can only be encoded through an existing XPC connection, which makes this problem sound like a catch-22 (you can't transfer the endpoint until you've created a connection, and you can't create a connection until you have the endpoint).
The solution ("trick") is that you need an intermediating process (let's call it "cornerstone") that already has XPC connections to the other two processes and can exchange endpoints between them.
In my application I ended up creating a daemon process which acts as my cornerstone, but I think you could do it directly in your application. Here's what you need to do:
Create an application with two XPC services, "A" and "B"
In "A" get the listener object for the process: either get the service listener created automatically (listener = NSXPCListener.serviceListener) or create a dedicated, anonymous, listener for the second process (using listener = NSXPCListener.anonymousListener).
Get the endpoint of the listener (listener.endpoint)
The application should ask "A" for its endpoint.
The application can then launch "B" and, using XPC again, pass the endpoint it got from "A" to "B".
"B" can now use the endpoint object it obtained from "A" (via the application) to create a direct connection to "A" using [[NSXPCConnection alloc] initWithListenerEndpoint:aEndpoint]].
So I have found that two processes are inevitably going to be unable to communicate with the same XPC service. That is because if you try to launch an XPC service, it will be a process unique to the launcher. And as far as I can tell, you can only communicate with an XPC service that your own process launched.
So I believe your second XPCService will be unable to "launch" the first XPCService, and therefore will be unable to communicate with it.
The best you can probably do is have your second XPCService communicate back to your main application process, which then communicates to the first XPCService.
You could do something like:
[[self.firstXPCConnection remoteObjectProxy] getSomeString:^(NSString *myString) {
    [[self.secondXPCConnection remoteObjectProxy] passSomeString:myString];
}];
Though, disclaimer: I haven't tried this. But it's the best I can offer with the knowledge I have.
I've been doing some research into BPM solutions and am looking to hopefully use jBPM to achieve my goal. I am aware it is possible to start a process instance with an event signal sent to the process engine, but I would like to be able to interact with process instances currently running in that engine WITHOUT knowing their instance ID.
I am aiming to achieve this in an interrupt fashion by sending an event to the process engine, with business data, that will match to the process instance containing that specific match in business data (for instance a customer number unique to a process instance).
I have not yet been able to figure out how to do this, another of my goals is to expose this via REST/SOAP, and I am aware that this functionality is NOT currently implemented in the jBPM5 console REST interface.
How would I go about doing this, what are the standard patterns for doing so, or what other process engines should I be looking at to achieve this?
Yeah, you can achieve that with jBPM, and I would recommend checking out jBPM 6 CR2.
In order to do what you need, you can start multiple processes inside a KieSession and then send your customer as the payload of your event. Only the process instance that contains that customer will catch the event (if it is modeled correctly, with a catch event node that filters by customer).
The REST endpoints are already there in jBPM 6.
Hope it helps
I have the problem that I have to run very long-running processes from my web service, and now I'm looking for a good way to handle the results. The scenario: a user starts such a long-running process via the UI. He then gets the message that his request was accepted and that he should come back some time later, so there is no need to display the status of his request or anything like that. I'm just looking for a way to handle the result of the long-running process properly. Since the processes are external programs, my application server is not aware of them; therefore I have to wait for these programs to terminate. Of course I don't want to use EJBs for this, because they would block for the time no result is available. Instead I thought of using JMS or Spring Batch. Has anyone had the same problem, or advice on which solution would be better?
It really depends on what forms of communication your external programs have available. JMS is a very good approach and immediately available in your app server but might not be the best option if your external program is a long running DB query which dumps the result in a text file...
The main advantage of Spring Batch over "just" using JMS as an asynchronous communication channel is its transactional properties, allowing the infrastructure to retry failed jobs, group jobs together, and such. Without knowing more about your specific setup, it is hard to give detailed advice.
Cheers,
I had a similar design requirement: users were sending XML files and I had to generate documents from them. Using JMS in this case is advantageous, since you can always add new instances of these processors, which can consume and execute the jobs in parallel.
You can use a timer task to check the status of, or monitor, these processes. Also, you can publish a message to a JMS queue once the processes are completed.
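The asynchronous hand-off can be sketched with plain java.util.concurrent standing in for JMS; in a real setup the BlockingQueue below would be a JMS destination, the worker would wait on the external program (e.g. via ProcessBuilder), and the consumer would be a message listener or message-driven bean. All names here are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class LongRunningJobs {
    public static void main(String[] args) throws Exception {
        // Stand-in for a JMS result queue.
        BlockingQueue<String> resultQueue = new LinkedBlockingQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(2);

        // The web layer accepts the request and returns immediately...
        System.out.println("request accepted");

        // ...while a worker waits for the external program and publishes the result.
        workers.submit(() -> {
            // Here you would launch and wait for the external program,
            // e.g. new ProcessBuilder(...).start().waitFor().
            String result = "report-42 generated";
            resultQueue.add(result);
        });

        // A consumer (e.g. a JMS listener) picks up the result later.
        String result = resultQueue.poll(5, TimeUnit.SECONDS);
        System.out.println("result: " + result);
        workers.shutdown();
    }
}
```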