Camunda: accessing process variables from delegate code (Spring)

Is it possible to somehow get the variables of the currently running process from the delegate's code (without passing a reference to the DelegateExecution through a chain of methods), or at least to get the ProcessDefinitionId?
Something like
@Autowired
CurrentDelegateExecution currentDelegateExecution;
or
runtimeService.getCurrentProcess()

I think there is a misunderstanding of how Camunda works here. Such a thing as "the process in which it is currently running" does not exist, for any process model may be instantiated any number of times, creating multiple process instances.
You can, however, query all active process instances using the RuntimeService. From there you can query variables or other specifics. For example:
runtimeService.createProcessInstanceQuery().active().list().stream()
        .map(processInstance -> runtimeService.getVariables(processInstance.getId()))
        .collect(Collectors.toList());
This would leave you with a List of variable-maps.
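If the goal is mainly to avoid passing the DelegateExecution through a chain of methods, that query can live in a small Spring bean of its own. A minimal sketch, assuming Camunda's RuntimeService is available for injection (the VariableLogger class name is made up for illustration):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.camunda.bpm.engine.RuntimeService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class VariableLogger {

    // provided by the Camunda engine's Spring integration
    @Autowired
    private RuntimeService runtimeService;

    // returns one variable map per active process instance
    public List<Map<String, Object>> variablesOfActiveInstances() {
        return runtimeService.createProcessInstanceQuery().active().list().stream()
                .map(pi -> runtimeService.getVariables(pi.getId()))
                .collect(Collectors.toList());
    }
}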

Restful triggering of Camunda process definitions from nodejs

I'm a beginner at Camunda/BPMN and I want to use it to control what is going on in nodejs, most likely using a REST API, at least for now. (Unless folks have a better idea for how nodejs should talk to Camunda.) My goal is to deliver systems where non-programmers can update the business logic in very practical ways.
I'd like to trigger the start of perhaps more than one process by sending a REST message, say, to reflect that "a new insurance policy has been sold". That might trigger the instantiation of, say, 2 processes on Monday, but perhaps on Tuesday we add a third, and the same REST API call should then trigger more activity from Wednesday on. (I figure it is better for nodejs to know about events but not about the process definitions. After all, my goal is to use Camunda as a sort of business-logic server for my application. The less the nodejs code needs to know, the better.)
Which REST API should I be using to express the message that, say "a new insurance policy has been sold"? When I look at:
https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/
I find it very confusing. What should "name" match in the business process definitions? I assume I don't need an executionId? I assume I can leave out tenantId?
Would some string in the message match the ID of a start event in one or more process definitions (or what has to match what)?
When I look at a process, is there an easy way to tell what variables I need to supply to start that process running?
Should I perhaps avoid using this event-oriented style of kicking off processes and just use the POST /process-definition/key/{key}/start? It would seem to me to be better form to trigger activity with events or signals or something like that rather than to have my nodejs code know about the specific process definition by name.
Should I be using events or signals in this case?
I gather that the start event should not be a "None Start Event", but I'm not clear on what type of start event TO use if I want automatic triggering based on events or signals. Would a "Non-interrupting - Message Start Event" be the right sort? I'm finding this confusing.
Once I have triggered the process to start, what does nodejs need to send to step the process forward from one task in that instance to the next?
Thanks!
In order to instantiate a new workflow instance you have the following possibilities:
Start exactly one instance:
- Start a workflow instance by its known "key": https://docs.camunda.org/manual/develop/reference/rest/process-definition/post-start-process-instance/
- Start a workflow instance by a message start event: https://docs.camunda.org/manual/develop/reference/rest/message/post-message/. A message can start only one specific process definition; the engine requires that the message name maps to a unique process definition. The message start event is the one you have to use in your BPMN process model. See also https://docs.camunda.org/manual/develop/reference/bpmn20/events/message-events/. This might indeed be the better approach, as it keeps your client independent of the process definition key.
Start multiple instances:
- Start workflow instances by a BPMN signal event: https://docs.camunda.org/manual/develop/reference/rest/signal/post-signal/. One signal name can start many instances at once.
The name of the message or the name of the signal is configured in the BPMN model. Both could work for your use case; the payload sketches below show the difference.
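To illustrate (a sketch based on the linked docs, not a verbatim recipe): the message endpoint takes a messageName, which must match the name of the message referenced by the message start event in your BPMN model; the signal endpoint takes the signal's name. Here /engine-rest is the default REST context path, and "NewPolicySold" and "policyId" are hypothetical values:

POST /engine-rest/message
{
  "messageName": "NewPolicySold",
  "processVariables": {
    "policyId": { "value": "P-123", "type": "String" }
  }
}

POST /engine-rest/signal
{
  "name": "NewPolicySold",
  "variables": {
    "policyId": { "value": "P-123", "type": "String" }
  }
}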
Once a process instance is started, it will automatically execute the next steps.
Perhaps following this example step by step can give you a better idea: https://blog.bernd-ruecker.com/use-camunda-without-touching-java-and-get-an-easy-to-use-rest-based-orchestration-and-workflow-7bdf25ac198e

Hystrix: Doing a network call in getFallback()

I've been playing around with Netflix OSS Hystrix and I am now exploring different configurations and possibilities to include it in my project. Among other things, my application needs to do network calls in HystrixCommand.getFallback() ...
Now, I read that it is best practice NOT to do network calls there and instead provide some generic answer (see the Hystrix wiki), and that if it is really necessary to do this, one should use a HystrixCommand or HystrixObservableCommand.
My question is: if I use a HystrixCommand, should I invoke it, e.g., with HystrixCommand.run() or HystrixCommand.queue(), or some other option?
Also, in my logs I've noticed that getFallback() can have different calling threads (e.g., Hystrix-Timer; I guess this depends on which thread interrupted the run method). Here I would like to know how calling HystrixCommand.run() from the fallback would affect my performance, since the calling thread will stay alive and blocked until that command finishes.
EDIT: With fresh eyes on the problem, I am now thinking that the "generic answer" (mentioned above) could be some form of promise, i.e., a CompletableFuture<T> in Java terminology. Returning a promise from HystrixCommand.run() would allow the calling (Hystrix-internal) thread to return immediately, thus releasing it. However, I am now stuck on implementing this behavior. Any ideas?
Thanks a lot for any help!
Use a HystrixCommand's execute method. Example:
@Override
protected YourReturnType getFallback() {
    return new MyHystrixFallbackCommand().execute();
}
If you want to work with async "promises" then you should probably implement a HystrixObservableCommand.
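For completeness, a minimal sketch of what such a fallback command could look like (MyHystrixFallbackCommand, the "FallbackGroup" key, and the callBackupService() helper are hypothetical names). Because it is a command of its own, the secondary network call gets its own thread pool, timeout, and circuit breaker. Calling queue() on it instead of execute() returns a java.util.concurrent.Future, which is the closest built-in match to the "promise" idea from the question:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class MyHystrixFallbackCommand extends HystrixCommand<String> {

    public MyHystrixFallbackCommand() {
        // a separate group key gives this command its own thread pool
        super(HystrixCommandGroupKey.Factory.asKey("FallbackGroup"));
    }

    @Override
    protected String run() throws Exception {
        // the secondary network call goes here
        return callBackupService();
    }

    // placeholder for the real backup network call
    private String callBackupService() {
        return "generic answer";
    }
}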

How to handle saving on a child context when the object is already deleted in the parent context?

I have a Core Data nested-contexts setup: a main-queue context for the UI and for saving to the SQLite persistent store, and a private-queue context for syncing data with the web service.
My problem is that the syncing process can take a long time, and there is a chance that the object being synced is deleted in the main-queue context in the meantime. When the private-queue context is saved, it crashes with the "Core Data could not fulfill a fault" exception.
Do you have any suggestions on how to check for this issue, or on a way to configure the contexts to handle this case?
There is no magic behind nested contexts. They don't solve a lot of concurrency problems without additional work. Many people (you seem to be one of them) expect things to work out of the box that are not supposed to work. Here is a little bit of background information:
If you create a child context using the private-queue concurrency type, then Core Data will create a queue for this context. To interact with objects registered with this context, you have to use either performBlock: or performBlockAndWait:. The most important thing those two methods do is ensure that the passed block is invoked on the queue of the context. Nothing more, nothing less.
Think about this for a moment in the context of an application not based on Core Data. If you want to do something in the background, you could create a new queue and schedule blocks to do that work in the background. When the job is done, you want to communicate the result of the background operation to another layer inside your app logic. What happens when the user has deleted, in the meantime, the object/data related to the results of the background operation? Basically the same thing: a crash.
What you are experiencing is not a Core Data specific problem. It is a problem you have as soon as you introduce concurrency. What you need is a policy, some kind of contract between your child and parent contexts. For example, before you delete the object from the root context, you should cancel all of the operations/blocks that are running on other queues and wait for the cancellation to finish before you actually delete the object.

Ninject binding setup

I have a weird case where I'm using Ninject and I'm not sure how to proceed. Our repositories in this instance are custom-written SQL generators, not LINQ to SQL, NHibernate, etc.
In order to reuse code within the system, we inject, within the implementation of the repositories, the repositories needed to build child objects (e.g., an "Order" object needs to get its "OrderDetail" objects and have them assigned to the order before it is returned to the calling area of the system, so in our OrderRepository we have an [Inject] IOrderDetailRepository OrderDetailRepo { get; set; }).
The hangup is that up until this point we've been able to keep everything configured InRequestScope(). Now we are using a Parallel.ForEach loop, and after an iteration completes we fire an event over to a singleton-scoped event handler to update the database. We'd do the update within the loop, but we've been trying to avoid tying the loop to a particular area of the system, since many areas could use this loop.
So we need to figure out how to configure Ninject so that, when we call into this singleton-scoped event handler, all IRepositories (IOrderRepository and IOrderDetailRepository) used within it are created fresh each time they are used and die off immediately afterwards.
Any hints?
So, I have "a" resolution, but I'm not happy with it... What I did was build up a ChildKernel and set the bindings in it the way I wanted them. Although this works as expected, it feels like some serious "code smell". I would love to see a better way of handling this scenario, if one exists.

TDD and a Service class (a class that does something but returns nothing)

I'm trying to follow TDD (I'm a newbie) while developing a Service class that builds tasks passed in by the service's clients. The built objects are then passed on to other systems. In other words, this service takes tasks but returns nothing as a result; it passes the built tasks on to other services.
So I'm wondering how I can write a test for it, because there is nothing to assert.
I'm thinking about using mocks to track the interactions inside the service, but I'm a little bit afraid of using mocks because I would be tied to the internal implementation of the service.
Thanks to all of you in advance!
There's no problem using mocks for this, since you are effectively going to be mocking the external interface of the components that are used internally by the component under test. This is really what mocking is intended for, and it sounds like a perfect match for your use case.
When doing TDD, this should also give you those quick turnaround cycles that are considered good practice, since you can just create mocks of those external services, and these mocks will easily allow you to write the next failing test.
You can also consider breaking the service up into a couple of classes: one responsible for building the list of tasks to be executed, and the other responsible for executing the list of tasks it is handed. This way you can directly test the code that builds the lists of tasks.
That said, I want to add a sample I posted on another question, regarding how I view the TDD process when external systems are involved.
Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data to the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you are expecting it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
Not sure what language you're using, so in pseudo-code it could be something like this:
when_service_is_passed_tasks
    before_each_test
        mockClients = CreateMocks of 3 TaskClients
        tasks = GetThreeTasks()
        myService = new TaskRouter(mockClients)
    sends_all_to_the_appropriate_clients
        tasks = GetThreeTasks()
        myService.RouteTasks(tasks)
        Assert mockClients[0].AcceptTask(tasks[0]) was called
        Assert mockClients[1].AcceptTask(tasks[1]) was called
        Assert mockClients[2].AcceptTask(tasks[2]) was called
    if_one_routing_fails_all_fail
        tasks = GetTasksWhereOneIsFailing()
        myService.RouteTasks(tasks)
        Assert mockClients[0].AcceptTask(*) was not called
        Assert mockClients[1].AcceptTask(*) was not called
        Assert mockClients[2].AcceptTask(*) was not called
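To make the pseudo-code concrete, here is a minimal sketch of the first test in Java with JUnit 4 and Mockito. TaskRouter, TaskClient, and Task are hypothetical types invented to mirror the names above; in a real TDD flow the TaskRouter implementation would be written after this test fails:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.List;
import org.junit.Test;

public class TaskRouterTest {

    // minimal stand-ins for the types the pseudo-code assumes
    static class Task { }

    interface TaskClient {
        void acceptTask(Task task);
    }

    static class TaskRouter {
        private final List<TaskClient> clients;

        TaskRouter(List<TaskClient> clients) {
            this.clients = clients;
        }

        // route each task to the client at the same index
        void routeTasks(List<Task> tasks) {
            for (int i = 0; i < tasks.size(); i++) {
                clients.get(i).acceptTask(tasks.get(i));
            }
        }
    }

    @Test
    public void sends_all_to_the_appropriate_clients() {
        // mock the external interface of each downstream system
        TaskClient client0 = mock(TaskClient.class);
        TaskClient client1 = mock(TaskClient.class);
        TaskClient client2 = mock(TaskClient.class);

        List<Task> tasks = List.of(new Task(), new Task(), new Task());
        TaskRouter router = new TaskRouter(List.of(client0, client1, client2));

        router.routeTasks(tasks);

        // assert on the interactions, not on a return value
        verify(client0).acceptTask(tasks.get(0));
        verify(client1).acceptTask(tasks.get(1));
        verify(client2).acceptTask(tasks.get(2));
    }
}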
