How to update Oracle BPMN flow diagram during execution

I'm working with Oracle BPMN (Fusion Middleware), using JDeveloper to create BPMN processes, and writing Java code for a custom page that displays the flow diagram for running processes. The problem we're encountering is that the BPMN diagrams do not display or update until the process hits certain trigger events (apparently asynchronous event points). So in many cases the diagrams do not even show up in a query until the BPMN process completes. Note that we don't normally have user input tasks, which qualify as async events and would also make the diagram show up.
Our team has talked to Oracle about it, and their solution was to wrap every BPMN call (mostly service calls) in an asynchronous BPEL wrapper, so that the BPMN process invokes an async request/response (i.e., two actions) which in turn calls the service. Doing this does work, but it adds huge overhead to developing BPMN processes, because every action has to be wrapped.
So I'm wondering if anyone else has explored or potentially solved this problem.
Some code snippets of what we're doing (partial code only):
To get the running instance IDs:
List<Column> columns = new ArrayList<Column>();
columns.add(...); // repeated for all relevant fields
Ordering ordering = ...;   // ordering criteria, elided
Predicate predicate = ...; // filter predicate, elided
IInstanceQueryInput input = new IInstanceQueryInput();
List<IProcessInstance> instances = client.getInstanceQueryService().queryProcessInstances(context, columns, predicate, ordering, input);
// however, the returned list doesn't contain the instance until its first async event, or until completion
After that, the AuditProcessDiagrammer is used to get the flow diagram, and DiagramEvents are used to update/highlight the flow in progress. The instanceId does show up in the Oracle Fusion Middleware control panel, so it must at least potentially be available. But trying to get an image for it results in a null image:
IProcessInstance pi = client.getInstanceQueryService().getProcessInstance(context, instance);
// HERE --> pi is null until the instance becomes available (so the rest of this isn't run)
String compositeDn = pi.getSca().getCompositeDN();
String componentName = pi.getSca().getComponentName();
IProcessModelPackage modelPackage = client.getProcessModelService().getProcessModel(context, compositeDn, componentName);
ProcessDiagramInfo info = new ProcessDiagramInfo();
info.setModelPackage(modelPackage);
AuditProcessDiagrammer dg = new AuditProcessDiagrammer(info.getModelPackage().getProcessModel().getProcess());
List<IAuditInstance> audits = client.getInstanceQueryService().queryAuditInstanceByProcessId(context, instance);
List<IDiagramEvent> events = ...; // function to get these (elided)
dg.highlight(events);
String base64image = dg.getImage();
See the HERE --> comment. That's where I need the instance to be valid.
If there are good alternatives (a setting, config, etc.) that others have successfully used, I'd love to hear them. I'm really not interested in strange workarounds (we already have one in the BPEL wrapper); I'm looking for a solution that keeps the BPMN process flow simple. Thanks.


Issue with Synchronisation of Agents of different types in a shared discrete space projection

I have an issue regarding the synchronisation of different agents.
So I have a shared context with a BaseAgent class, as the tutorial suggests for the case where there is more than one agent type in the same context. Then I have 4 more agent classes which are children of the BaseAgent class. For each of them I have the necessary serialisable agent packages, and in my model class I also have specific package receivers and providers for each one of them.
All those agents are sharing a discrete spatial projection of the form:
repast::SharedDiscreteSpace<BaseAgentClass, repast::WrapAroundBorders, repast::SimpleAdder< BaseAgentClass > >* discreteSpace;
Three of my 4 agent types can move around and I have implemented their moves. However, they can move from one process to another and I need to use the 4 synchronisation statements which were introduced in the RepastHPC tutorial HPC:D03, Step 02: Agents Moving in Space.
The thing however is that I am not certain how to actually synchronise them, since the agents would need their specific providers, receivers and serialisable packages in order to be copied into the other process correctly. I tried doing the following:
discreteGridSpace->balance();
repast::RepastProcess::instance()->synchronizeAgentStatus<BaseAgentClass, SpecificAgentPackage, SpecificAgentPackageProvider, SpecificAgentPackageReceiver>(context, *specificAgentProvider, *specificAgentProvider, *specificAgentReceiver);
repast::RepastProcess::instance()->synchronizeProjectionInfo<BaseAgentClass, SpecificAgentPackage, SpecificAgentPackageProvider, SpecificAgentPackageReceiver>(context, *specificAgentProvider, *specificAgentProvider, *specificAgentReceiver);
repast::RepastProcess::instance()->synchronizeAgentStates< SpecificAgentPackage, SpecificAgentPackageProvider, SpecificAgentPackageReceiver >(* specificAgentProvider, * specificAgentReceiver);
However, when running I get the following error:
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 3224 RUNNING AT Aleksandars-MBP
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
So I am not certain how to actually synchronise the agents for each specific agent type, since they are all sharing the same context and spatial projection with the BaseAgentClass.
Thank you for the help in advance!
The intention here is that you'd have a single package that can be used for all the agent types. For example, in the Zombies demo model, the package has entries for the agent id components, and also infected and infectionTime. These last two only apply to the Human agents and not to the Zombies.
The methods where you provide and receive content should check for the agent type and take the appropriate action. For example, in the Zombies Model we have
void ZombieObserver::provideContent(RelogoAgent* agent, std::vector<AgentPackage>& out) {
    AgentId id = agent->getId();
    AgentPackage content = { id.id(), id.startingRank(), id.agentType(), id.currentRank(), 0, false };
    if (id.agentType() == humanType) {
        Human* human = static_cast<Human*>(agent);
        content.infected = human->infected();
        content.infectionTime = human->infectionTime();
    }
    out.push_back(content);
}
Here you can see we fill the AgentPackage with some defaults for infected and infectionTime and then update those if the agent is of the Human type. This is a ReLogo style model, so some of the details might be different but hopefully it's clear that there is a single package type that can handle all the agent types, and that you use the agent type to distinguish between types in your provide methods.

Axon - Cannot emit query update in different microservice

I'm struggling with a situation where I want to emit a query update via the queryUpdateEmitter, but from a different module (microservice). My application is built from microservices, and both are connected to the same Axon Server. The first service creates a subscriptionQuery and sends some commands. After a while (through a few commands and events), the second service handles some event and emits an update for the query that was subscribed to earlier. Unfortunately, it seems this emit never reaches the subscriber. The query classes are exactly the same and sit in the same packages.
Subscription:
@GetMapping("/refresh")
public Mono<MovieDTO> refreshMovies() {
    commandGateway.send(
            new CreateRefreshMoviesCommand(UUID.randomUUID().toString()));
    SubscriptionQueryResult<MovieDTO, MovieDTO> refreshedMoviesSubscription =
            queryGateway.subscriptionQuery(
                    new GetRefreshedMoviesQuery(),
                    ResponseTypes.instanceOf(MovieDTO.class),
                    ResponseTypes.instanceOf(MovieDTO.class)
            );
    return refreshedMoviesSubscription.updates().next();
}
Emitter:
@EventHandler
public void handle(DataRefreshedEvent event) {
    log.info("[event-handler] Handling {}, movieId={}",
            event.getClass().getSimpleName(),
            event.getMovieId());
    queryUpdateEmitter.emit(GetRefreshedMoviesQuery.class, query -> true,
            Arrays.asList(
                    MovieDTO.builder().aggregateId("as").build(),
                    MovieDTO.builder().aggregateId("be").build()));
}
Is this even possible in the newest version of Axon? A similar configuration within one service works as expected.
Edit:
I have found a workaround for this situation:
The second service, instead of emitting the update via the queryUpdateEmitter, publishes an event with the list of movies.
The first service handles this event and then emits the update via its queryUpdateEmitter.
But I'd still like to know whether there is a way to do this using queries only, because that seems more natural to me (commandGateway/eventGateway work as expected; the queryUpdateEmitter is the exception).
This follows from the implementation of the QueryUpdateEmitter (regardless of whether Axon Server is used).
The QueryUpdateEmitter stores a set of update handlers, referencing the issued subscription queries. However, it only maintains the subscription queries handled within the given JVM (the QueryUpdateEmitter implementation is not distributed).
Its intent is to be paired with the component (typically a Query Model "projector") which answers queries about a given model, updates the model, and emits those updates.
Hence, placing the QueryUpdateEmitter operations in a different (micro)service than where the query is handled will not work.
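As an illustration only (not the asker's code): a minimal sketch of that pairing, using a hypothetical MovieProjection component in which the @QueryHandler for GetRefreshedMoviesQuery and the QueryUpdateEmitter live in the same service/JVM:
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryHandler;
import org.axonframework.queryhandling.QueryUpdateEmitter;
import org.springframework.stereotype.Component;

// Hypothetical projection: answers the subscription query AND emits its updates.
@Component
public class MovieProjection {

    private final QueryUpdateEmitter queryUpdateEmitter;

    public MovieProjection(QueryUpdateEmitter queryUpdateEmitter) {
        this.queryUpdateEmitter = queryUpdateEmitter;
    }

    // Initial result of the subscription query.
    @QueryHandler
    public MovieDTO handle(GetRefreshedMoviesQuery query) {
        return MovieDTO.builder().aggregateId("initial").build();
    }

    // The update is emitted from the same JVM that handles the query,
    // which is what the non-distributed QueryUpdateEmitter requires.
    @EventHandler
    public void on(DataRefreshedEvent event) {
        queryUpdateEmitter.emit(GetRefreshedMoviesQuery.class,
                query -> true,
                MovieDTO.builder().aggregateId(event.getMovieId()).build());
    }
}
The service that dispatches the subscriptionQuery can remain a separate microservice; the constraint is only that the query handling and the emitting happen in the same application.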

Using a timer in a background worker in Windows Phone

I am developing an app which loads some URLs, parses them, stores the results in a SQLite DB, and then the UI reads the saved data and shows it in controls. This process should run in an almost infinite loop. To keep the UI responsive, I plan to read the data from the DB on the main thread and have another thread (a BackgroundWorker) load the data and insert it into the DB. Is it logical and possible to run the read and write processes in a DispatcherTimer, one timer on the main thread and the other inside the BackgroundWorker? And how? Or does anyone have a better idea?
main thread:
DispatcherTimer _Timer1 = new DispatcherTimer();
_Timer1.Interval = _Interval;
_Timer1.Tick += _Timer1_Tick;
_Timer1.Start(); // the timer never fires unless it is started

void _Timer1_Tick(object sender, EventArgs e)
{
    // read data from db and show in controls
}
secondary thread:
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = sender as BackgroundWorker;
    DispatcherTimer _Timer2 = new DispatcherTimer();
    _Timer2.Interval = _Interval;
    _Timer2.Tick += _Timer2_Tick;
    _Timer2.Start(); // as above, the timer must be started
}

void _Timer2_Tick(object sender, EventArgs e)
{
    // write data into db
}
What you're planning to do won't work.
Both your _Timer1_Tick and _Timer2_Tick will run on the UI thread. If you perform long-running operations there, they'll hang the UI.
I don't get it, why do you need timers at all? Using timers for anything other than measuring time intervals is rarely a good strategy. You could, for example, run your update process in an infinite loop in the background; as soon as it puts new data into the DB, call Dispatcher.BeginInvoke (passing any data you want) to notify your UI thread that it should update itself with the newly available data.
And by the way, for tasks like "send HTTP request, wait for the response, parse, store, repeat", the new async/await feature is a natural choice. For WP7 the functionality is available as the "Async CTP" redistributable package for Visual Studio 2010; for WP8 it's already integrated into the framework. There are some compatibility issues between the two, though.
load some url, parse them, keep them into sqlite db and the UI will read the saved data and show them in controls
Please don't do that. Don't create your own thread management system, just don't. I'm not saying it won't work, but it'll most likely backfire in the most horrendous and inexplicable ways. Like, for example, a DispatcherTimer completely exploding in your face since it runs on the UI thread. If you really want to use threading, consider ThreadPool.QueueUserWorkItem() or Task.Run() to start fire-and-forget actions.
Your workflow is also just strange; I don't get why you need to write data you already have to a DB, then read it back, and only then use it. Wouldn't it make more sense to take the deserialized data and both write it to the DB and present it to the UI, instead of the needless detour through disk I/O when you already have the data?
Have you considered using Messaging in your app? It's a pretty well known MVVM pattern implemented both in MVVM Light as the Messenger class and in PRISM as the EventAggregator. It seems to me that your system has a Message for "new data available from service" and that message has two subscribers: writing to a DB and updating the UI.

Relation between command handlers, aggregates, the repository and the event store in CQRS

I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and call the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and an internal state. State is never public. The only way to change state is by using the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls an event handler that sets the internal state accordingly.
The repository simply allows loading aggregates by a given ID, and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used), and reloading these events as a so-called event stream.
So far, so good.
Now there are some issues that I did not yet get:
If a command handler is to call behavior on an already existing aggregate, everything is quite easy. The command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding the aggregate should later on be rebuilt using the events. This means that creation of the aggregate is done in reply to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks to me like a chicken-and-egg problem: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: How do I create new aggregates, and who does what?
When an aggregate triggers an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who originates the "please send the new events to the repository / event store" action? The aggregate itself? The repository by watching the aggregate? Someone else who is subscribed to the internal events? ...?
Last but not least I have a problem understanding the concept of an event stream correctly: In my imagination, it's simply something like an ordered list of events. What's of importance is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events:
var stream = eventStore.LoadStream(id)
var user = new User(stream)
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its original state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order: the aggregate would end up reporting the customer's earlier address as the current one. So yes, the stream is an ordered list of events, and that order matters.
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what happens behind the scenes):
Application Service:
UserCommandHandler
    Handle(CreateUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Create(cmd.UserName, ...)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)

    Handle(BlockUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Block(cmd.Reason)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
    created = false
    blocked = false
    Changes = new List<Event>

    ctor(eventStream)
        isNewEvent = false
        foreach (event in eventStream)
            this.Apply(event, isNewEvent)

    Create(userName, ...)
        if (this.created) throw "User already exists"
        isNewEvent = true
        this.Apply(new UserCreated(...), isNewEvent)

    Block(reason)
        if (!this.created) throw "No such user"
        if (this.blocked) throw "User is already blocked"
        isNewEvent = true
        this.Apply(new UserBlocked(...), isNewEvent)

    Apply(userCreatedEvent, isNewEvent)
        this.created = true
        if (isNewEvent) this.Changes.Add(userCreatedEvent)

    Apply(userBlockedEvent, isNewEvent)
        this.blocked = true
        if (isNewEvent) this.Changes.Add(userBlockedEvent)
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis' excellent answer:
When dealing with "creational" use cases (i.e. that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other mechanism to rehydrate for that matter). Sometimes the factory is just a static method (good for "context"/"intent" capturing), sometimes it's an instance method of another aggregate (good place for "data" inheritance), sometimes it's an explicit factory object (good place for "complex" creation logic).
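To make that concrete, here is a minimal Java sketch (hypothetical names such as User.register and UserRegistered) of the static-factory variant; the rehydrating constructor coexists with the factory that captures the creation intent:
import java.util.ArrayList;
import java.util.List;

// Hypothetical creation event, for illustration only.
class UserRegistered {
    final String userId;
    final String userName;
    UserRegistered(String userId, String userName) {
        this.userId = userId;
        this.userName = userName;
    }
}

class User {
    private final List<Object> changes = new ArrayList<>();
    private boolean created;

    private User() { }

    // Rehydration: replay history without recording new changes.
    User(List<Object> eventStream) {
        eventStream.forEach(this::apply);
    }

    // Static factory capturing the "register a user" intent.
    static User register(String userId, String userName) {
        User user = new User();
        user.applyNew(new UserRegistered(userId, userName));
        return user;
    }

    private void applyNew(Object event) {
        apply(event);
        changes.add(event); // new events are tracked for the repository to persist
    }

    private void apply(Object event) {
        if (event instanceof UserRegistered) {
            created = true;
        }
    }
}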
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
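A sketch of that pull-based variant (hypothetical base class; a push-based alternative would expose an observer instead):
import java.util.ArrayList;
import java.util.List;

// Hypothetical base class for aggregates that track their uncommitted events.
abstract class ChangeTrackingAggregate {
    private final List<Object> changes = new ArrayList<>();

    // Behavior methods call record(...) for every new event they raise.
    protected void record(Object event) {
        changes.add(event);
    }

    // Pull side: the repository asks what happened since the last flush.
    public Object[] getChanges() {
        return changes.toArray();
    }

    // Called once the changes have been flushed to the event store.
    public void acceptChanges() {
        changes.clear();
    }
}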
Your eventstream is a linked list, with each revision (event/changeset) pointing to the previous one (a.k.a. the parent). Your eventstream is a sequence of events/changes that happened to a specific aggregate. The order is only guaranteed within the aggregate boundary.
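To illustrate the per-aggregate ordering, here is a toy in-memory store (purely hypothetical, no persistence) that keeps one ordered stream per aggregate and uses an expected-version check so concurrent writers cannot scramble that order:
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy event store: one ordered stream per aggregate, no ordering across aggregates.
class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    List<Object> loadStream(String aggregateId) {
        return streams.getOrDefault(aggregateId, Collections.emptyList());
    }

    void appendToStream(String aggregateId, long expectedVersion, List<Object> newEvents) {
        List<Object> stream = streams.computeIfAbsent(aggregateId, id -> new ArrayList<>());
        if (stream.size() != expectedVersion) {
            // someone appended first; the caller retries, keeping the stream's order intact
            throw new IllegalStateException("Concurrent write on stream " + aggregateId);
        }
        stream.addAll(newEvents);
    }
}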
I almost agree with yves-reynhout and dennis-traub but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to re-hydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

OrderAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

ProfileAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
    dispatchCommand(command) method:
        newEvents = ConcurentProofFunctionCaller.executeFunctionUntilSucceeds(tryToDispatchCommand)
        EventDispatcher.dispatchEvents(newEvents)

    tryToDispatchCommand(command) method:
        aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
        aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
        newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
        AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)

ConcurentProofFunctionCaller class
    executeFunctionUntilSucceeds(pureFunction) method:
        do this n times
            try
                result = pureFunction()
                return result
            catch (ConcurentWriteException)
                continue
        throw TooManyRetries

AggregateRepository class
    loadAggregate(aggregateClass, aggregateId) method:
        aggregate = new aggregateClass
        priorEvents = EventStore.loadEvents(aggregateId)
        this.applyEventsOnAggregate(aggregate, priorEvents)

    saveAggregate(aggregateId, aggregate, newEvents) method:
        this.applyEventsOnAggregate(aggregate, newEvents)
        EventStore.saveEventsForAggregate(aggregateId, newEvents, priorEvents.version)

SomeAggregate class
    handleCommand1(command1) method:
        return new SomeEvent or throw someException BUT don't change state!

    applySomeEvent(SomeEvent) method:
        changeStateSomehow() and don't throw any exception and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have things injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates:
Command handlers should not change state; they yield events or throw exceptions.
Event appliers should not throw any exception and should not return anything; they only change internal state.
An open-source PHP implementation of this could be found here.

Optimizing Windows WF 3.5 Instance Creation for 200 Requests

I currently have a process that creates a Windows WF 3.5 instance for each account that a customer has.
foreach (Account acct in Customer.Accounts)
{
    Dictionary<string, object> param = new Dictionary<string, object>();
    param.Add("account", acct);

    // create the workflow instance
    WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(AcctWorkflow), param);

    // start and run the workflow
    instance.Start();
    scheduler.RunWorkflow(instance.InstanceId);
}
Currently the creation of each request takes about 500 ms, but with 200 accounts the total time is over a minute.
This happens in real time, as the user clicks a create-request button.
Please advise if there is anything else I can do to make it faster.
I'm not sure what can be done for WF3.5 to speed things up. The WF3.5 run-time engine has many inherent "lack of optimization" problems that just can't be fixed or worked around (especially the way looping constructs, such as the While activity, are implemented).
If it is feasible for your project you should really consider re-writing it into WF4. The run-time engine for WF4 is a complete rewrite with a strong consideration for large+fast workflows. See http://msdn.microsoft.com/en-us/library/gg281645.aspx for a speed comparison of the WF3.5 vs WF4 run-time engines.
Even if it is not possible to rewrite your workflow in WF4, you should consider running the 3.5 workflow on the WF4 engine (via the WF4 Interop activity). This could still potentially double your speed compared to the WF3.5 engine. See the bottom of the linked page for the comparison using the Interop activity.
