Platform
Dynamics CRM 2016 online
C#, plugin
A lead is created and a plugin fires on create, which tries to find any Lead with the same email and merge the new Lead into the old one.
So the plugin fires on creation of Lead B (post-operation).
It finds a Lead with the same email (assume Lead A) and merges Lead B into Lead A.
Problem
Lead B is never saved; no error is thrown, but the save indicator keeps spinning.
And nothing else happens, i.e. no merge takes place.
Debugging
Changed the logic a bit to run the plugin on Update (the Create step was disabled).
It worked perfectly, with exactly the same code.
Question
Is this operation, i.e. the merge, not possible from inside a Create plugin of the record being merged?
Code
MergeRequest merge = new MergeRequest();
merge.SubordinateId = targetEntity.Id;
merge.Target = new EntityReference(primaryLead.LogicalName, primaryLead.Id);
merge.PerformParentingChecks = false;
merge.UpdateContent = updateContent;
MergeResponse merged = (MergeResponse)svc.Execute(merge);
targetEntity is the record for which the plugin fired, and primaryLead is the result of a fetch query.
updateContent is some data to be copied into the merged lead.
Change the plugin step to asynchronous and that should work fine. A synchronous post-operation plugin still executes inside the platform transaction that is creating Lead B, so the record is not yet committed when the MergeRequest runs; an asynchronous step fires after that transaction completes, which is presumably why the identical code works on Update.
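For reference, a minimal sketch of what the asynchronously registered plugin could look like end to end (the class name and the duplicate-lookup query are illustrative, not from the original post; IPluginExecutionContext.Mode is 1 when a step runs asynchronously):

using System;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class MergeDuplicateLeadPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Mode is 1 when the step runs asynchronously, i.e. after the
        // create transaction has committed; bail out if registered sync.
        if (context.Mode != 1)
            return;

        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService svc = factory.CreateOrganizationService(context.UserId);

        var targetEntity = (Entity)context.InputParameters["Target"];
        var email = targetEntity.GetAttributeValue<string>("emailaddress1");
        if (string.IsNullOrEmpty(email))
            return;

        // Find another lead with the same email (Lead A in the question).
        var query = new QueryExpression("lead") { ColumnSet = new ColumnSet(false) };
        query.Criteria.AddCondition("emailaddress1", ConditionOperator.Equal, email);
        query.Criteria.AddCondition("leadid", ConditionOperator.NotEqual, targetEntity.Id);
        var primaryLead = svc.RetrieveMultiple(query).Entities.FirstOrDefault();
        if (primaryLead == null)
            return;

        var merge = new MergeRequest
        {
            SubordinateId = targetEntity.Id,          // Lead B is merged away...
            Target = primaryLead.ToEntityReference(), // ...into Lead A
            PerformParentingChecks = false,
            UpdateContent = new Entity("lead")        // stands in for the question's updateContent
        };
        svc.Execute(merge);
    }
}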
Related
I am currently making a turn-based strategy game with Laravel (MySQL DB with the InnoDB engine) and want to make sure that I don't have bugs due to race conditions, duplicate requests, bad actors, etc.
Because these kinds of bugs are hard to test, I wanted to get some clarification.
Many actions in the game can only occur once per turn, like buying a new unit. Here is a simplified bit of code for purchasing a unit.
$player = Player::find($player_id);

if ($player->gold >= $unit_price && $player->has_purchased == false) {
    $player->has_purchased = true;
    $player->gold -= $unit_price;
    $player->save();

    $unit = new Unit();
    $unit->player_id = $player->id;
    $unit->save();
}
So my concern would be if two threads both made it past the if statement and then executed the block of code at the same time.
Is this a valid concern?
And would the solution be to wrap everything in a database transaction like https://betterprogramming.pub/using-database-transactions-in-laravel-8b62cd2f06a5 ?
This means that a good portion of my code will be wrapped in database transactions, because I have a lot of instances that are variations of the above code for different actions.
Also, there is a situation where multiple users can update the same value in the database, so I want to avoid the case where two users increment the value at the same time and it only gets incremented once.
Since you are using Laravel to presumably develop a web-based game, you can expect multiple concurrent connections. A transaction is just one part of the equation. Transactions ensure operations are performed atomically; in your case, that means both the player and unit saves succeed or both fail together, so you won't end up in the situation where the money is deducted but the unit is not granted.
However, there is another facet to this: if there is a real possibility of two separate requests for the same player coming in concurrently, you may also encounter a race condition. This is because a transaction is not a lock, so two transactions can run at the same time. The implication (in your case) is that two checks happen on the same player instance to ensure enough gold is available, both succeed, and both deduct the same gold, yet two distinct units are granted at the end (i.e. item duplication). To avoid this you'd use a lock to prevent other threads from obtaining the same player row/model, so your full code would be:
DB::transaction(function () use ($unit_price, $player_id) {
    $player = Player::where('id', $player_id)->lockForUpdate()->first();

    if ($player->gold >= $unit_price && $player->has_purchased == false) {
        $player->has_purchased = true;
        $player->gold -= $unit_price;
        $player->save();

        $unit = new Unit();
        $unit->player_id = $player->id;
        $unit->save();
    }
});
This will ensure that any other threads trying to retrieve the same player have to wait until the lock is released (which happens when the first transaction commits).
There are more nuances to deal with here as well, like a player sending a duplicate request by double-clicking, for example, and that can get a bit more complex.
For your purchase system, it's advisable to use DB::transaction, since it protects you from inconsistent records. Check out the Laravel docs for more information: https://laravel.com/docs/9.x/database#database-transactions As for reactive data you need to keep track of, simply bind a variable to that data in your front end, then use the variable to update your DB records.
Use this form when you need to bail out if any exception or error occurs: if an exception is thrown, the data will not be saved and all the changes are rolled back. I recommend using transactions wherever you can. The basic format is:
DB::beginTransaction();

try {
    // database actions like create, update, etc.
    DB::commit(); // finally commit to the database
} catch (\Exception $e) {
    DB::rollBack(); // roll back if any error occurs
    // something went wrong
}
See the Laravel docs here
I've been struggling with this for 4 days now.
There's this C# Process Engine API:
https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.1/com.ibm.p8.pe.dev.doc/web_services/ws_reference.htm
What I need to do is to retrieve the WorkflowNumber when launching the workflow, so later I can find that process in the system.
The issue here is that when you launch it, it returns the LaunchStep (the first step in the workflow), which doesn't have that ID assigned yet; it's null. The only thing available is the LaunchStep's WOBNumber.
In order to assign the Workflow ID to the step, you need to dispatch the step, so I do that:
UpdateStepRequest request = new UpdateStepRequest();
UpdateFlagEnum flag = UpdateFlagEnum.UPDATE_DISPATCH;
request.updateFlag = flag;
request.stepElement = element; // add the launch step
session.Client.updateStep(request);
And here the funny part happens. From this point there is completely no way to retrieve it: StepElements are stateless, updateStep() returns nothing, and, best of all, the LaunchStep is now destroyed in the system, because it's a LaunchStep; it just gets destroyed after the launch.
Any tips would be appreciated!
We are using Realm in a Xamarin app and have some issues refreshing the local database based on a remote source. Data is fetched from a remote endpoint and stored locally using Realm for easier/faster access.
Program flow is as follows:
Fetch data from remote source (if possible).
Loop through the entities returned by the remote source while keeping track of the IDs we've seen so far. New or updated entities are written to Realm.
Loop through the set of locally stored entities, removing entities we haven't seen in step 2 with Realm.Remove(entity); (in a transaction)
Return Realm.All<Entity>();
Unfortunately, the entities are returned by step 4 before all "remove" operations have been written. As a result, it takes a couple of refreshes before the local database is completely in sync.
The remove operation is done as follows:
foreach (Entity entity in realm.All<Entity>())
{
    if (seenIds.Contains(entity.Id))
    {
        continue;
    }

    realm.Write(() => {
        realm.Remove(entity);
    });
}
Is there a way to have Realm wait till the transaction is completed, before returning Realm.All<Entity>()?
I am pretty sure this is not particularly a Realm issue; the same pattern would cause problems with a lot of enumerable, mutable containers. You are removing items from a list whilst iterating it, so the enumeration moves on too far.
There is no buffering on Realm transactions, so I guarantee the problem is not about having Realm wait till the transaction is completed; it is your list logic.
There are two basic ways to do this differently:
Use ToList to get a list of all objects from All; this is expensive if there are many objects, because you will instantiate all of them.
Instead of removing objects inside the loop, add them to a list of items to be removed, then iterate that list.
Note that using a transaction per remove, as you are doing with Write here, is relatively slow. You can do many operations in one transaction.
We are also working on other improvements to the Realm API that might give a more efficient way of handling this. It would be very helpful to know the relative data sizes - the number of removals vs records in the loop. We love getting sample data and schemas (can send privately to help#realm.io).
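For completeness, here is a sketch of option 1 combined with a single transaction (ToList comes from System.Linq and materializes a snapshot, so removals no longer affect the sequence being enumerated):

var snapshot = realm.All<Entity>().ToList(); // instantiates every object up front

realm.Write(() => {
    foreach (Entity entity in snapshot)
    {
        if (!seenIds.Contains(entity.Id))
            realm.Remove(entity);
    }
});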
An example of option 2:
var toDelete = new List<Entity>();

foreach (Entity entity in realm.All<Entity>())
{
    if (!seenIds.Contains(entity.Id))
        toDelete.Add(entity);
}

realm.Write(() => {
    foreach (Entity entity in toDelete)
        realm.Remove(entity);
});
I'm working with Oracle BPMN (Fusion middleware), using JDeveloper to create BPMN processes, and writing Java code for a custom page to display the flow diagram for running processes. The problem being encountered is that the BPMN diagrams do not display/update until they hit certain trigger events (apparently asynchronous event points). So in many cases the diagrams do not even show up in a query until the BPMN process completes. Note we don't normally have user input tasks, which qualify as async events and also result in the diagram then showing up.
Our team has talked to Oracle about it, and their solution was to wrap every BPMN call (mostly service calls) in asynchronous BPEL wrappers, so that the BPMN process invokes an async request/response (thus two actions) that calls the service. Doing this does work, but it adds huge overhead to the effort of developing BPMN processes, because every action has to be wrapped.
So I'm wondering if anyone else has explored or potentially solved this problem.
Some code snippets of what we're doing (partial code only):
To get the running instance IDs:
List<Column> columns = new ArrayList<Column>();
columns.add(...); // repeated for all relevant fields
Ordering ...
Predicate ...
IInstanceQueryInput input = new IInstanceQueryInput();
List<IProcessInstance> instances = client.getInstanceQueryService().queryProcessInstances(context, columns, predicate, ordering, input);
// however, instances doesn't return the instance until the first async event, or until completion
After that, the AuditProcessDiagrammer is used to get the flow diagram, and DiagramEvents are used to update/highlight the flow in progress. The instanceId does show up in the Oracle Fusion control panel, so it must at least potentially be available. But trying to get an image for it results in a null image:
IProcessInstance pi = client.getInstanceQueryService().getProcessInstance(context, instance);
// HERE --> pi is null until the image is available (so the rest of this isn't run)
String compositeDn = pi.getSca().getCompositeDN();
String componentName = pi.getSca().getComponentName();
IProcessModelPackage modelPackage = client.getProcessModelService().getProcessModel(context, compositeDn, componentName);
ProcessDiagramInfo info = new ProcessDiagramInfo();
info.setModelPackage(modelPackage);
AuditProcessDiagrammer dg = new AuditProcessDiagrammer(info.getModelPackage().getProcessModel().getProcess());
List<IAuditInstance> audits = client.getInstanceQueryService().queryAuditInstanceByProcessId(context, instance);
List<IDiagramEvent> events = // function to get these
dg.highlight(events);
String base64image = dg.getImage();
See the HERE --> part. That's where I need instance to be valid.
If there are good alternatives (setting, config, etc...) that others have successfully used, I'd love to hear it. I'm really not interested in strange workarounds (already have that in the BPEL wrapper). I'm looking for a solution that allows the BPMN process flow to remain simple. Thanks.
I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and call the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and internal state. State is never public. The only way to change state is through the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls event handlers that set the internal state accordingly.
The repository simply allows loading aggregates on a given ID, and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used), and reloading these events as a so-called event stream.
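To make that vocabulary concrete, here is a rough C# sketch of the shapes I have in mind (all names are illustrative, not taken from any particular framework):

using System;
using System.Collections.Generic;

public interface IEventStore
{
    // The stored events for one aggregate, ordered oldest first.
    IReadOnlyList<object> LoadStream(Guid aggregateId);
    void AppendToStream(Guid aggregateId, int expectedVersion, IEnumerable<object> newEvents);
}

public interface IRepository<TAggregate>
{
    TAggregate LoadById(Guid id); // rebuilds the aggregate from its event stream
    void Add(TAggregate aggregate);
}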
So far, so good.
Now there are some issues that I haven't figured out yet:
If a command handler is to call behavior on an already existing aggregate, everything is quite easy. The command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding, the aggregate should later be rebuilt from its events. This means that creation of the aggregate is done in response to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks to me like a chicken-and-egg problem: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: how do I create new aggregates, and who does what?
When an aggregate raises an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who initiates the "please send the new events to the repository / event store" action? The aggregate itself? The repository, by watching the aggregate? Someone else who is subscribed to the internal events? ...?
Last but not least I have a problem understanding the concept of an event stream correctly: In my imagination, it's simply something like an ordered list of events. What's of importance is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events:
var stream = eventStore.LoadStream(id)
var user = new User(stream)
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its original state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order: the aggregate would end up at the outdated address.
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what would happen behind the scenes):
Application Service:
UserCommandHandler
Handle(CreateUser cmd)
stream = store.LoadStream(cmd.UserId)
user = new User(stream.Events)
user.Create(cmd.UserName, ...)
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Handle(BlockUser cmd)
stream = store.LoadStream(cmd.UserId)
user = new User(stream.Events)
user.Block(cmd.Reason)
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
created = false
blocked = false
Changes = new List<Event>
ctor(eventStream)
isNewEvent = false
foreach (event in eventStream)
this.Apply(event, isNewEvent)
Create(userName, ...)
if (this.created) throw "User already exists"
isNewEvent = true
this.Apply(new UserCreated(...), isNewEvent)
Block(reason)
if (!this.created) throw "No such user"
if (this.blocked) throw "User is already blocked"
isNewEvent = true
this.Apply(new UserBlocked(...), isNewEvent)
Apply(userCreatedEvent, isNewEvent)
this.created = true
if (isNewEvent) this.Changes.Add(userCreatedEvent)
Apply(userBlockedEvent, isNewEvent)
this.blocked = true
if (isNewEvent) this.Changes.Add(userBlockedEvent)
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis' excellent answer:
When dealing with "creational" use cases (i.e. those that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other rehydration mechanism, for that matter). Sometimes the factory is just a static method (good for capturing "context"/"intent"), sometimes it's an instance method of another aggregate (a good place for "data" inheritance), sometimes it's an explicit factory object (a good place for "complex" creation logic).
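For instance, the second variant could look roughly like this (all names hypothetical):

using System;

public class User
{
    private User() { }

    // Static factory: a natural place to capture creation intent.
    public static User Register(Guid id, string name)
    {
        var user = new User();
        // the new user would apply a UserRegistered(id, name) event here
        return user;
    }
}

public class Account
{
    private bool _suspended;

    // "Creational" behavior hosted on another aggregate: the Account spawns
    // the User, so command handlers never new up aggregate roots directly.
    public User RegisterUser(Guid userId, string userName)
    {
        if (_suspended)
            throw new InvalidOperationException("Suspended accounts cannot register users.");
        return User.Register(userId, userName);
    }
}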
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
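A minimal sketch of that pull-based variant, assuming nothing beyond the names used above:

using System.Collections.Generic;

public abstract class AggregateRootEntity
{
    private readonly List<object> _changes = new List<object>();

    // Command methods record the events they produce here.
    protected void Record(object @event) => _changes.Add(@event);

    // Pull model: the repository asks the aggregate for pending changes...
    public object[] GetChanges() => _changes.ToArray();

    // ...and tells it to clear them once the event store write succeeded.
    public void AcceptChanges() => _changes.Clear();
}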
Your event stream is a linked list. Each revision (event/changeset) points to the previous one (a.k.a. the parent). Your event stream is a sequence of events/changes that happened to a specific aggregate. The order is only guaranteed within the aggregate boundary.
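In code, the shape could be as simple as (illustrative only):

using System;
using System.Collections.Generic;

// An event stream: an ordered sequence of revisions, scoped to
// (and only ordered within) a single aggregate.
public sealed class EventStream
{
    public Guid AggregateId { get; set; }
    public long Version { get; set; }                  // revision of the latest event/changeset
    public IReadOnlyList<object> Events { get; set; }  // oldest first; each entry's parent is its predecessor
}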
I almost agree with yves-reynhout and dennis-traub, but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to re-hydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
OrderAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
ProfileAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
dispatchCommand(command) method:
newEvents = ConcurentProofFunctionCaller.executeFunctionUntilSucceeds(tryToDispatchCommand)
EventDispatcher.dispatchEvents(newEvents)
tryToDispatchCommand(command) method:
aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)
ConcurentProofFunctionCaller class
executeFunctionUntilSucceeds(pureFunction) method:
do this n times
try
call result=pureFunction()
return result
catch(ConcurentWriteException)
continue
throw TooManyRetries
AggregateRepository class
loadAggregate(aggregateClass, aggregateId) method:
aggregate = new aggregateClass
priorEvents = EventStore.loadEvents(aggregateId)
this.applyEventsOnAggregate(aggregate, priorEvents)
saveAggregate(aggregateId, aggregate, newEvents) method:
this.applyEventsOnAggregate(aggregate, newEvents)
EventStore.saveEventsForAggregate(aggregateId, newEvents, priorEvents.version)
SomeAggregate class
handleCommand1(command1) method:
return new SomeEvent or throw someException BUT don't change state!
applySomeEvent(SomeEvent) method:
changeStateSomehow() and not throw any exception and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have dependencies injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates (a minimal sketch follows the list):
command handlers should not change state; they yield events or throw exceptions
event appliers should not throw any exception and should not return anything; they only change internal state
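A minimal C# rendering of those two rules (names mirror the pseudo-code above; this is a sketch, not a complete framework):

using System;
using System.Collections.Generic;

public class Command1 { }
public class SomeEvent { }

public class SomeAggregate
{
    private bool _done;

    // Command handler: validates and yields events, but does NOT change state.
    public IEnumerable<object> HandleCommand1(Command1 command)
    {
        if (_done) throw new InvalidOperationException("Already done");
        return new object[] { new SomeEvent() };
    }

    // Event applier: changes state, never throws, returns nothing.
    public void ApplySomeEvent(SomeEvent e) => _done = true;
}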
An open-source PHP implementation of this can be found here.