Flux Store and actions with temporary data

Note: This is a follow-up to https://stackoverflow.com/questions/32536037/flux-store-collection-by-criteria-vs-single-item, but it can be understood and answered independently.
Imagine we have an application for managing (CRUD) Tasks. One operation is editing a Task.
First, the edit view loads the Task using an action creator that asynchronously fetches it from the server and dispatches a TASK_LOAD_SUCCESS event together with the Task payload. The Task Store then stores the Task and emits a change event so that the edit view can read it and fill the form.
When the user submits the form, the changes should be saved and the edit view should be closed.
On submit, the edit view tells the action creator to asynchronously save the Task. On AJAX success, TASK_SAVE_SUCCESS is dispatched (to the Task Store).
Q1: What should the Task Store do? Should it set an internal flag that the task has been saved, emit the change event, and let the view read that flag from the store and close itself if the flag is true?
Q2: Should the Store find the Task in the collection of previously loaded Tasks and update it there? Other Tasks in the collection will remain stale (see Q2 in https://stackoverflow.com/questions/32536037/flux-store-collection-by-criteria-vs-single-item).
Q3: What if we edit the Task again? The Store still has the flag saying the Task was successfully saved, so the view closes itself immediately, even though the flag is left over from the previous save. How do we deal with that?
A similar problem arises when we want to delete a Task. We use optimistic locking, so we must first read the Task from the server, then show the confirmation dialog, and finally delete the Task on the server (providing the ETag from the first read).
Q4: How do we use the store to signal that the Task has been loaded for deletion? While this AJAX call is in flight, another asynchronous read operation might complete and clash with this one. Should there be a separate Store for Task deletion?
Q5: This is the same as Q1. After the deletion, how do we tell the view that it is done so it can close the confirmation dialog?

Q1-Q3: you may store an edit_timestamp in the TaskStore and an open_timestamp for the confirmation dialog. On emitChange you can compare whether edit_timestamp > open_timestamp, so a flag left over from a previous save never closes a freshly opened view.
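For illustration, here is a minimal sketch of that timestamp idea (the store shape and the method names are made up for this example, assuming a Node-style EventEmitter store):

var EventEmitter = require('events').EventEmitter;

// Hypothetical TaskStore: it records *when* the last save succeeded instead of a boolean flag.
var TaskStore = Object.assign({}, EventEmitter.prototype, {
    _editTimestamp: 0,

    getEditTimestamp: function() {
        return this._editTimestamp;
    },

    handleAction: function(action) {
        if (action.actionType === 'TASK_SAVE_SUCCESS') {
            this._editTimestamp = Date.now();
            this.emit('change');
        }
    }
});

// In the edit view: openTimestamp is taken when the form is opened,
// so a save from a previous editing session can never close the new one.
function onTaskStoreChange(view) {
    if (TaskStore.getEditTimestamp() > view.openTimestamp) {
        view.close();
    }
}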
Q4: you may cache the request Promise for each taskId on a fetch request. Instead of doing the same request twice (a read fetch and a delete fetch for the same taskId), you can subscribe to the existing Promise. That allows you to keep only a single instance of the task and, I hope, avoids the Q5 problems:
// To see how to arrange promise-based async interaction, you may look here: http://mjw56.github.io/handling-asynchronous-data-flow-in-flux/index.html
var promises = {};

// Returns a Promise
var asyncGetCall = function(taskId) {...};

var getTaskForDelete, getTaskForRead;
getTaskForDelete = getTaskForRead = function(taskId) {
    if (!promises[taskId]) {
        promises[taskId] = asyncGetCall(taskId);
    }
    return promises[taskId];
};

getTaskForDelete(10).then(function() {...}); // performs asyncGetCall
getTaskForRead(10).then(function() {...});   // does nothing extra; waits for the first request's results

Related

Multiple NetSuite Script Event Types

I'm new to NetSuite and have been tasked with integrating another system with NetSuite. I've created a User Event script that needs to run on multiple NetSuite events. The deployment interface seems to only let me assign the script to Create OR Edit, but not both. Is this not possible, or what am I doing wrong?
Thanks,
You can define the events on which the UE script runs within the script itself, and leave the event type assignment in the deployment record blank.
First, if you leave the event type blank in the UI and don't include logic within the script to limit when it runs, it will be triggered on all event types (create, edit, etc.) whenever the triggering event occurs (beforeLoad, beforeSubmit, afterSubmit).
Selecting the event type in the UI is an easy shortcut for limiting when a script runs without having to worry about additional script logic; however, for maximum flexibility you can use script logic as follows, or modify it to suit your needs (in SS2.0):
function beforeSubmit(scriptContext) {
    log.debug('type', scriptContext.type);
    if (scriptContext.type !== scriptContext.UserEventType.CREATE) {
        log.error('Exiting script', 'Context type is ' + scriptContext.type);
        return;
    }
    // Do your work here
}
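Since the original question asks for both Create and Edit, a hedged variation of the same pattern might look like this (it reuses the scriptContext.UserEventType enum from the snippet above; adjust the list of allowed types to your needs):

function beforeSubmit(scriptContext) {
    // Run for both CREATE and EDIT; exit for every other event type.
    var allowedTypes = [
        scriptContext.UserEventType.CREATE,
        scriptContext.UserEventType.EDIT
    ];
    if (allowedTypes.indexOf(scriptContext.type) === -1) {
        log.debug('Exiting script', 'Context type is ' + scriptContext.type);
        return;
    }
    // Do your work here
}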

Difference between dispatch and emit in Flux/React Native

I'm new to Flux/React Native.
I'm quite confused about the use of dispatch vs emit in Flux.
What is the main difference between them? And what happens when I use the same action type in both dispatch and emit?
For example:
Dispatcher.dispatch({
    actionType: 'ACTION1'
});
SomeStore.emit('ACTION1');
In Flux, events are emitted by the store to indicate a change in its state. This 'change' event is listened to by views, and it prompts a view to fetch new state from the store. Mind you, the event never contains a payload / information about the new state. It is really just what it reads: an event.
Actions are slightly different. While they are indeed events, they are things that occur in our domain, e.g., "Add item to cart". And they carry a payload that contains information about the action, e.g.:
{
    id: 'add-item-to-cart',
    payload: {
        cartId: 123,
        itemId: 1234,
        name: 'Box of chocolates',
        quantity: 1
    }
}
Actions are 'dispatched' from the views, and the store(s) respond to the dispatch by possibly changing their state and emitting a 'change' event.
So basically:
A view dispatches an action with a payload (usually due to a user interaction) via the dispatcher.
The store (which had previously registered itself with the dispatcher) is notified of the action and uses the payload to change its state and emit a change event.
The view (which had previously registered itself with the store) is notified of the change event, which causes it to get the new state from the store and update itself.
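For a concrete picture, here is a minimal sketch of that loop (assuming Facebook's flux npm package and Node's EventEmitter; the store and action names are invented for this example):

var Dispatcher = require('flux').Dispatcher;
var EventEmitter = require('events').EventEmitter;

var dispatcher = new Dispatcher();

var CartStore = Object.assign({}, EventEmitter.prototype, {
    items: [],
    getItems: function() {
        return this.items;
    }
});

// The store registers with the dispatcher and reacts to actions.
dispatcher.register(function(action) {
    if (action.id === 'add-item-to-cart') {
        CartStore.items.push(action.payload);
        CartStore.emit('change'); // no payload: just "something changed"
    }
});

// The view listens to the store and pulls the new state on change.
CartStore.on('change', function() {
    console.log('rendering', CartStore.getItems());
});

// The view dispatches an action, usually in response to a user interaction.
dispatcher.dispatch({
    id: 'add-item-to-cart',
    payload: { cartId: 123, itemId: 1234, name: 'Box of chocolates', quantity: 1 }
});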
So that's the difference. And about the question of using the same action type in both dispatch and emit: it doesn't really make sense, does it?
I suggest you read this blog post - http://blog.andrewray.me/flux-for-stupid-people/ (The title means no offence BTW :))
You already know this, but I'll say it again: A unidirectional data flow is central to the Flux pattern. That means data (not control) always flows in one direction.

NSFetchedResultsController inserts lock up the UI

I am building a chat application using web-sockets and core-data.
Basically, whenever a message is received on the web-socket, the following happens:
1. Check if the message exists by performing a Core Data fetch using the id (indexed).
2. If 1. returns yes, update the message and perform a Core Data save. If 1. returns no, create the message and perform a Core Data save.
3. Update the table view by updating or inserting rows.
Here's my setup:
I have 2 default managed object contexts: MAIN (NSMainQueueConcurrencyType) and WRITER (NSPrivateQueueConcurrencyType). WRITER has a reference to the persistent store coordinator, MAIN does not, but WRITER is set as MAIN's parent.
The table view is connected to an NSFetchedResultsController, which is connected to MAIN.
Fetches are all performed using temporary contexts (performBlock:) that have MAIN as their parent. Writes look like this: save the temporary context, then save MAIN, then save WRITER.
Problem:
Because the updates come in via the web-socket, many updates happen in a short time in a busy chat room. Syncs that fetch older messages can mean many messages coming in rapidly. And this locks up the UI.
I track the changes to the UI using the fetched results controller's delegate like this:
// called on main thread
- (void)controllerWillChangeContent:(NSFetchedResultsController *)controller
{
    NSLog(@"WILL CHANGE CONTENT");
    [_tableView beginUpdates];
}

// called on main thread
- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller
{
    NSLog(@"DID CHANGE CONTENT");
    [_tableView endUpdates];
}
and here's an example of what I see in the Log-file:
2014-07-14 18:46:20.630 AppName[4938:60b] DID CHANGE CONTENT
2014-07-14 18:46:22.334 AppName[4938:60b] WILL CHANGE CONTENT
That's almost 2 seconds per insert!
Is it simply a limitation I'm hitting here with table views? (I'm talking about 1000+ rows in some cases.) But I can't imagine that's the case; UITableViews are super-optimized for that sort of operation.
Any obvious newbie mistake I might be committing?
This is not logical:
Writes look like this: Save temporary context, then save MAIN, then save WRITER.
If WRITER is a child context of MAIN, the changes are not persisted until the next save.
OK, I figured it out.
The problem is tableView:heightForRowAtIndexPath:. The fetches needed to calculate the heights of the rows take time, and each time tableView.endUpdates gets called, the UITableView needs the heights of ALL rows.
tableView:estimatedHeightForRowAtIndexPath: is a possible way to go (iOS 7+), or I might opt for caching the heights myself (since the rows don't change) or just displaying fewer rows altogether.

Relation between command handlers, aggregates, the repository and the event store in CQRS

I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and call the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and an internal state. State is never public. The only way to change state is by using the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls an event handler that sets the internal state accordingly.
The repository simply allows loading aggregates by a given ID and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used), and reloading these events as a so-called event stream.
So far, so good.
Now there are some issues that I did not yet get:
If a command handler is to call behavior on an existing aggregate, everything is quite easy: the command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding, the aggregate should later be rebuilt from the events. This means creation of the aggregate is done in reply to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks like a chicken-and-egg problem: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: how do I create new aggregates, and who does what?
When an aggregate raises an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who initiates the "please send the new events to the repository / event store" action? The aggregate itself? The repository, by watching the aggregate? Someone else who is subscribed to the internal events? ...?
Last but not least, I have a problem understanding the concept of an event stream correctly: in my imagination, it's simply an ordered list of events. What's important is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events:

var stream = eventStore.LoadStream(id);
var user = new User(stream.Events);
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its initial state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
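In JavaScript, that hydration pattern might look roughly like this (a sketch under the assumptions above; the event shape and names are invented, mirroring the pseudo-code further down):

// Hydrate an aggregate by replaying its history; an empty stream yields the initial state.
function User(events) {
    this.created = false;
    this.changes = [];                 // uncommitted events
    (events || []).forEach(function(e) {
        this.apply(e, false);          // replayed events are not "new"
    }, this);
}

User.prototype.create = function(userName) {
    if (this.created) throw new Error('User already exists');
    this.apply({ type: 'UserCreated', userName: userName }, true);
};

User.prototype.apply = function(event, isNew) {
    if (event.type === 'UserCreated') this.created = true;
    if (isNew) this.changes.push(event);
};

// Replaying an empty stream yields a blank user that only create() may bring to life:
var user = new User([]);
user.create('john');
// user.changes now holds the single new UserCreated event to append to the store.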
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order: the customer would end up at the earlier address.
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what happens behind the scenes):
Application Service:
UserCommandHandler
    Handle(CreateUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Create(cmd.UserName, ...)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)

    Handle(BlockUser cmd)
        stream = store.LoadStream(cmd.UserId)
        user = new User(stream.Events)
        user.Block(cmd.Reason)
        store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
    created = false
    blocked = false
    Changes = new List<Event>

    ctor(eventStream)
        isNewEvent = false
        foreach (event in eventStream)
            this.Apply(event, isNewEvent)

    Create(userName, ...)
        if (this.created) throw "User already exists"
        isNewEvent = true
        this.Apply(new UserCreated(...), isNewEvent)

    Block(reason)
        if (!this.created) throw "No such user"
        if (this.blocked) throw "User is already blocked"
        isNewEvent = true
        this.Apply(new UserBlocked(...), isNewEvent)

    Apply(userCreatedEvent, isNewEvent)
        this.created = true
        if (isNewEvent) this.Changes.Add(userCreatedEvent)

    Apply(userBlockedEvent, isNewEvent)
        this.blocked = true
        if (isNewEvent) this.Changes.Add(userBlockedEvent)
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis' excellent answer:
When dealing with "creational" use cases (i.e. ones that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other rehydration mechanism, for that matter). Sometimes the factory is just a static method (good for capturing "context"/"intent"), sometimes it's an instance method of another aggregate (a good place for "data" inheritance), sometimes it's an explicit factory object (a good place for "complex" creation logic).
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
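A small JavaScript sketch of the pull-based variant (the method names follow the answer above; everything else is illustrative):

function EventSourcedAggregate() {
    this._changes = []; // events recorded since the last flush
}

// Pull model: the repository asks the aggregate for its uncommitted events.
EventSourcedAggregate.prototype.getChanges = function() {
    return this._changes.slice(); // a copy, so callers cannot mutate the internal list
};

// Called after the events were flushed to the event store.
EventSourcedAggregate.prototype.acceptChanges = function() {
    this._changes = [];
};

// Typical usage in a repository (eventStore.append is an assumed API):
// eventStore.append(aggregateId, expectedVersion, aggregate.getChanges());
// aggregate.acceptChanges();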
Your event stream is a linked list, each revision (event/changeset) pointing to the previous one (a.k.a. the parent). Your event stream is the sequence of events/changes that happened to a specific aggregate. The order is only guaranteed within the aggregate boundary.
I almost agree with yves-reynhout and dennis-traub, but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to re-hydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

OrderAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)

ProfileAggregate:
    ctor(eventStream)
        foreach (event in eventStream)
            this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
    dispatchCommand(command) method:
        newEvents = ConcurentProofFunctionCaller.executeFunctionUntilSucceeds(tryToDispatchCommand)
        EventDispatcher.dispatchEvents(newEvents)

    tryToDispatchCommand(command) method:
        aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
        aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
        newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
        AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)

ConcurentProofFunctionCaller class
    executeFunctionUntilSucceeds(pureFunction) method:
        do this n times
            try
                result = pureFunction()
                return result
            catch (ConcurentWriteException)
                continue
        throw TooManyRetries

AggregateRepository class
    loadAggregate(aggregateClass, aggregateId) method:
        aggregate = new aggregateClass
        priorEvents = EventStore.loadEvents(aggregateId)
        this.applyEventsOnAggregate(aggregate, priorEvents)

    saveAggregate(aggregateId, aggregate, newEvents) method:
        this.applyEventsOnAggregate(aggregate, newEvents)
        EventStore.saveEventsForAggregate(aggregateId, newEvents, priorEvents.version)

SomeAggregate class
    handleCommand1(command1) method:
        return new SomeEvent or throw someException BUT don't change state!

    applySomeEvent(SomeEvent) method:
        changeStateSomehow() and don't throw any exception and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have dependencies injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates:
Command handlers should not change state; they yield events or throw exceptions.
Event appliers should not throw any exception and should not return anything; they only change internal state.
An open-source PHP implementation of this can be found here.

How to know when a web page is loaded when using QtWebKit?

Both QWebFrame and QWebPage have a void loadFinished(bool ok) signal which can be used to detect when a web page is completely loaded. The problem is when a web page has some content loaded asynchronously (AJAX). How can you know when the page is completely loaded in this case?
I haven't actually done this, but I think you may be able to achieve what you want using QNetworkAccessManager.
You can get the QNetworkAccessManager from your QWebPage using the networkAccessManager() function. QNetworkAccessManager has a signal finished(QNetworkReply *reply) which is emitted whenever a network request made by the QWebPage instance finishes.
The finished signal gives you a QNetworkReply instance, from which you can get a copy of the original request made, in order to identify the request.
So, create a slot to attach to the finished signal, use the passed-in QNetworkReply's methods to figure out which file has just finished downloading and if it's your Ajax request, do whatever processing you need to do.
My only caveat is that I've never done this before, so I'm not 100% sure that it would work.
Another alternative might be to use QWebFrame's methods to insert objects into the page's object model and also insert some JavaScript which then notifies your object when the Ajax request is complete. This is a slightly hackier way of doing it, but should definitely work.
EDIT:
The second option seems better to me. The workflow is as follows:
Attach a slot to the QWebFrame::javaScriptWindowObjectCleared() signal. At this point, call QWebFrame::evaluateJavaScript() to add code similar to the following:
window.onload = function() { /* page has fully loaded */ };
Put whatever code you need in that function. You might want to add a QObject to the page via QWebFrame::addToJavaScriptWindowObject() and then call a function on that object. This code will only execute when the page is fully loaded.
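For illustration, the injected page-side JavaScript might look like this, assuming a QObject was exposed under the name loadWatcher via QWebFrame::addToJavaScriptWindowObject() with a slot pageFullyLoaded() (both names are hypothetical):

window.onload = function() {
    /* The page and its synchronous resources have loaded; notify the C++ side. */
    if (window.loadWatcher) {
        window.loadWatcher.pageFullyLoaded();
    }
};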
Hopefully this answers the question!
To check that a specific element has loaded, you can use a QTimer. Something like this in Python:
@pyqtSlot()
def on_webView_loadFinished(self):
    self.tObject = QTimer()
    self.tObject.setInterval(1000)
    self.tObject.setSingleShot(True)
    self.tObject.timeout.connect(self.on_tObject_timeout)
    self.tObject.start()

@pyqtSlot()
def on_tObject_timeout(self):
    dElement = self.webView.page().currentFrame().documentElement()
    element = dElement.findFirst("css selector")
    if element.isNull():
        self.tObject.start()
    else:
        print "Page loaded"
When your initial html/images/etc. finish loading, that's it: the page is completely loaded. This fact doesn't change if you then decide to use some JavaScript to fetch extra data, page views, or whatever after the fact.
That said, what I suspect you want to do here is expose a QtScript object/interface to your view that you can invoke from your page's script, effectively providing a "callback" into your C++ once you've decided (from the page script) that the page has "completely loaded".
Hope this helps give you a direction to try...
The OP thought it was due to delayed AJAX requests, but there could also be another reason that would also explain why a very short time delay fixes the problem: there is a bug that causes the described behaviour:
https://bugreports.qt-project.org/browse/QTBUG-37377
To work around this problem, the loadFinished() signal must be connected using a queued connection.
