I'm trying to subscribe to upcoming events using the EvtSubscribe function:
hsubscription = EvtSubscribe(
    NULL,                       // session
    NULL,                       // signal
    NULL,                       // channel path
    L"<QueryList><Query Id='0' Path='Application'><Select>*[System[EventRecordID >= 1037374]]</Select></Query></QueryList>",  // query
    NULL,                       // bookmark
    context,                    // context
    clbk,                       // callback
    EvtSubscribeToFutureEvents  // flags
);
But the callback is never invoked.
I tried several other XPath + channel path variants, such as Event/System[EventRecordID=1037374] and *[System/EventRecordID=1037374], but none of them work either.
BTW, whenever the subscription is created for any other field (for example *[System/Computer="windows-build"]), everything works perfectly fine.
What am I doing wrong?
Thanks
I have an array of Filters exposed as an Observable, and I'd like to add/remove filters from it. Here is the code I have; it currently only adds a Filter the first time the function runs.
The second time, nothing happens.
private _filters$ = new BehaviorSubject<Filter[]>([]);

addFilter(added: Filter) {
  debugger;
  // adding to array of filters
  this._filters$.pipe(
    tap(d => { debugger; }),
    first(),
    map(filters => ([...filters, added]))
  ).subscribe(this._filters$);
}
So my question is: why does this happen? Why does it run only once? (By the way, first() is not the reason.)
I know I can make the code work like so:
private _filters$ = new BehaviorSubject<Filter[]>([]);
currentFilters;

init() {
  this._filters$.subscribe(f => this.currentFilters = f);
}

addFilter(added: Filter) {
  this._filters$.next([...this.currentFilters, added]);
}
Actually, it is because of first(). The first time you run the function, it creates the stream and subscribes to the BehaviorSubject. When it receives the first value it forwards it to the BehaviorSubject and then completes the BehaviorSubject, because the BehaviorSubject is itself the subscriber at the end of the pipe. The second time you run it, the BehaviorSubject is already stopped, so it immediately unsubscribes any new subscriptions to it.
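A minimal sketch of that failure mode, using number[] in place of Filter[] (the names here are illustrative):

import { BehaviorSubject } from 'rxjs';
import { first, map } from 'rxjs/operators';

const filters$ = new BehaviorSubject<number[]>([]);

// A consumer that subscribed before any add
filters$.subscribe(v => console.log('consumer:', v)); // logs []

const add = (n: number) =>
  filters$
    .pipe(
      first(),                        // emits one value, then completes
      map(filters => [...filters, n])
    )
    .subscribe(filters$);             // the value AND the completion are forwarded to filters$

add(1); // consumer logs [1], and filters$ is completed right after
add(2); // nothing happens: filters$ has already received complete()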
Without knowing too much about your actual goal, my suggestion is that instead of putting the BehaviorSubject at the bottom of the pipeline, you put it at the top.
// You don't actually need the caching behavior yet, so just use a `Subject`
private _filters$ = new Subject<Filter>();

// Hook this up to whatever is going to be using these filters
private _pipeline$ = this._filters$.pipe(
  // Use scan instead of mapping back into self
  scan((filters, newFilter) => ([...filters, newFilter]), [] as Filter[]),
  // Store the latest value for new subscribers
  shareReplay(1)
);
// Now this method is just pushing into the `Subject` and the pipeline never has to be torn down
addFilter(added: Filter) {
  debugger;
  this._filters$.next(added);
}
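Consumers would then read from _pipeline$ rather than from the Subject directly; something like this (someFilter and anotherFilter are placeholders):

// Each subscriber receives the latest accumulated array thanks to shareReplay(1)
this._pipeline$.subscribe(filters => console.log('current filters:', filters));

this.addFilter(someFilter);    // logs [someFilter]
this.addFilter(anotherFilter); // logs [someFilter, anotherFilter]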
I have the following requirement.
I have an Angular service with a BehaviorSubject.
An http request is made, and when it completes the BehaviorSubject.next method is invoked with the value.
This value can change during the lifecycle of the single page.
Different subscribers are registered to it and get invoked whenever this changes.
The problem is that while the http request is pending, the BehaviorSubject already contains a default value and subscribers immediately get this value.
What I want is for subscribers to wait (be deferred) until the http request is done, and only then receive the value it sets.
So what I need is some kind of deferred Behavior subject mechanism.
How would I implement this using RxJS?
Another requirement is that if I subscribe to the BehaviorSubject in a method, the subscriber should get the first non-default value and then the subscription should end. We don't want local subscriptions in functions to be re-executed.
Use a filter on your behavior subject so your subscribers won't get the first default emitted value:
mySubject$: BehaviorSubject<any> = new BehaviorSubject<any>(null);

httpResponse$: Observable<any> = this.mySubject$.pipe(
  filter(response => !!response),
  map(response => {
    const modifyResponse = response;
    // modify response
    return modifyResponse;
  }),
  take(1)
);

this.httpResponse$.subscribe(response => console.log(response));

this.myHttpCall().subscribe(response => this.mySubject$.next(response));
You can of course wrap the httpResponse$ observable in a method if you need to.
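For example, the wrapper could look roughly like this (the method name is illustrative):

getFirstResponse(): Observable<any> {
  return this.mySubject$.pipe(
    filter(response => !!response), // skip the default null while the request is pending
    take(1)                         // complete after the first real value
  );
}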
I think the fact that you want to defer the emitted default value straight away brings into question why you want to use a BehaviorSubject at all. Let's remember: the primary reason to use a BehaviorSubject (instead of a Subject or a plain Observable) is to emit a value immediately to any subscriber.
If you need an Observable type where you need control of the producer (via .next([value])) and/or you want multicasting of subscription out of the box, then Subject is appropriate.
If an additional requirement on top of this is that subscribers need a value immediately then you need to consider BehaviorSubject.
If you hadn't said you need to update the value from other non-http events/sources, I would have suggested a shareReplay(1) pattern. Nevertheless...
private cookieData$: Subject<RelevantDataType> = new Subject<RelevantDataType>();

// Function for triggering the http request that updates the
// internal Subject.
// Consumers of the Subject can potentially invoke this
// themselves if they receive 'null' or no value on subscribing to the subject
public loadCookieData(): Observable<RelevantDataType> {
  return this.http.get('http://someurl.com/api/endpoint')
    .map(mapDataToRelevantDataType);
}

// Function for updating the service's internal cookieData$
// Subject from external consumers which need to update this
// value via non-http events
public setCookieData(data: any): void {
  const newCookieValue = this.mapToRelevantDataType(data); // <-- If necessary
  this.cookieData$.next(newCookieValue); // <-- updates the value for all subscribers
}

get cookieData(): Observable<RelevantDataType> {
  return this.cookieData$.asObservable();
}
The solution is based on the OP's comments. It:
- deals with subscribing to subject type.
- deals with external subscribers not being able to 'next' a new value directly
- deals with external producers being able to set a new value on the Subject type
- deals with not giving a default value whilst http request is pending
I have a firstRun dialog defined in the bot like this:
// First run dialog
bot.dialog('firstRun', [
    function (session, next) {
        session.userData.token = _.get(session, 'message.user.token', null) || _.get(session, 'userData.token', null);
    }
]).triggerAction({
    onFindAction: function (context, callback) {
        var score = 0;
        if (session.userData.token doesn't exist, or a new token was received in `session.message.user.token`) {
            score = 1.1;
        }
        callback(null, score);
    }
});
And there's a LUIS model integrated with a dialog that triggers on an intent, let's say Help:
bot.dialog('help', [
    (session, args) => {
        let entities = _.get(args, 'intent.entities', null);
        let topic = _.get(builder.EntityRecognizer.findEntity(entities, 'topic'), 'entity', null) || _.get(args, 'option', null);
        session.replaceDialog(topic);
    }
])
.triggerAction({
    matches: 'Help'
});
The onFindAction handler runs on every message, and it triggers firstRun only on the first message, when session.userData.token is not yet set.
The problem is that if the first message matches the Help intent, the help dialog does not get triggered, because firstRun is triggered instead. It works from the second message onwards, when firstRun is no longer triggered.
How can I ensure any matching intent triggers the corresponding dialog, irrespective of firstRun?
If there's a different way possible to achieve the same thing, please suggest.
Addition
What I am trying to accomplish is this: I have a web service auth token that I want to keep in session.userData.token, and it refreshes every hour. So right now onFindAction runs on every utterance and checks whether session.userData.token doesn't exist (which means it's the first utterance) OR a new token has been sent. In both cases I trigger firstRun to update session.userData.token, and I then want to proceed to trigger whichever dialog matched the LUIS intent of the utterance. But whenever firstRun is triggered, none of the other dialogs are triggered. It would be ideal to have a simpler mechanism for this, I suppose.
Thanks
It sounds like you're trying to have a pass-through intent handler that would trigger before the message is routed to the actual handlers. Middleware would be the best place to handle your token refresh logic, but working with session in your middleware isn't easy. This blog post of mine explains why - http://www.pveller.com/smarter-conversations-part-4-transcript/.
Your best bet is the routing event, I believe. It's synchronous via events and you are given the session object. You should be able to validate and refresh your token as needed before the message reaches the proper intent handler destination.
bot.on('routing', function (session) {
    if (!session.userData.token) {
        // receive new token
    }
});
Unlike middleware though, you are not given the next callback to continue the chain, so you will have to make sure you fetch the token synchronously. The blog post I mentioned previously explains this part as well.
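Building on the skeleton above, a hedged sketch of what that might look like (isTokenExpired and refreshTokenSync are placeholders for whatever synchronous check/refresh you implement):

bot.on('routing', function (session) {
    // Both helpers are hypothetical; the point is that the refresh must
    // complete synchronously, since there is no next() callback to defer to.
    var incoming = _.get(session, 'message.user.token', null);
    if (incoming || !session.userData.token || isTokenExpired(session.userData.token)) {
        session.userData.token = incoming || refreshTokenSync();
    }
});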
Collaboration Mode:
What is the best way to propagate changes from Client #1's canvas to client #2's canvas? Here's how I capture and send events to Socket.io.
$scope.canvas.on('object:modified', function(e) {
    Socket.whiteboardMessage({
        eventId: 'object:modified',
        event: e.target.toJSON()
    });
});
On the receiver side, this code works splendidly for adding new objects to the screen, but I could not find documentation on how to select and update an existing object in the canvas.
fabric.util.enlivenObjects([e.event], function(objects) {
    objects.forEach(function(o) {
        $scope.canvas.add(o);
    });
});
I did see that Objects have individual setters and one bulk setter, but I could not figure out how to select an existing object based on the event data.
Ideally, the flow would be:
Receive event with targeted object data.
Select the existing object in the canvas.
Perform bulk update.
Refresh canvas.
Hopefully someone with Fabric.JS experience can help me figure this out. Thanks!
UPDATED ANSWER - Thanks AJM!
AJM was correct in suggesting a unique ID for every newly created element. I was also able to assign an ID to all newly created drawing paths. Here's how it worked:
var t = new fabric.IText('Edit me...', {
    left: $scope.width / 2 - 100,
    top: $scope.height / 2 - 50
});
t.set('id', randomHash());
$scope.canvas.add(t);
I also captured newly created paths and added an id:
$scope.canvas.on('path:created', function(e) {
    if (e.target.id === undefined) {
        e.target.set('id', randomHash());
    }
});
However, I encountered an issue where my ID was visible in console log, but it was not present after executing object.toJSON(). This is because Fabric has its own serialization method which trims down the data to a standardized list of properties. To include additional properties, I had to serialize the data for transport like so:
$scope.canvas.on('object:modified', function(e) {
    Socket.whiteboardMessage({
        object: e.target.toJSON(['id']) // includes "id" in output
    });
});
Now each object has a unique ID with which to perform updates. On the receiver's side of my code, I added AJM's object-lookup function. I placed this code in the "startup" section of my application so it would only run once (after Fabric.js is loaded, of course!)
fabric.Canvas.prototype.getObjectById = function (id) {
    var objs = this.getObjects();
    for (var i = 0, len = objs.length; i < len; i++) {
        if (objs[i].id == id) {
            return objs[i];
        }
    }
    return 0;
};
Now, whenever a new socket.io message is received with whiteboard data, I am able to find it in the canvas via this line:
var obj = $scope.canvas.getObjectById(e.object.id);
Inserting and removing are easy, but for updating, this final piece of code did the trick:
obj.set(e.object); // Updates properties
$scope.canvas.renderAll(); // Redraws canvas
$scope.canvas.calcOffset(); // Updates offsets
All of this required me to handle the following events. Paths are treated as objects once they're created.
$scope.canvas.on('object:added',function(e) { });
$scope.canvas.on('object:modified',function(e) { });
$scope.canvas.on('object:moving',function(e) { });
$scope.canvas.on('object:removed',function(e) { });
$scope.canvas.on('path:created',function(e) { });
I did something similar involving a single shared canvas between multiple users and ran into this exact issue.
To solve this problem, I added unique IDs (using a javascript UUID generator) to each object added to the canvas (in my case, there could be many users working on a canvas at a time, thus I needed to avoid collisions; in your case, something simpler could work).
Fabric objects' set method will let you add an arbitrary property, like an id: o.set('id', yourid). Before you add() a new Fabric object to your canvas (and send that across the wire), tack on an ID property. Now, you'll have a unique key by which you can pick out individual objects.
From there, you'd need a method to retrieve an object by ID. Here's what I used:
fabric.Canvas.prototype.getObjectById = function (id) {
    var objs = this.getObjects();
    for (var i = 0, len = objs.length; i < len; i++) {
        if (objs[i].id == id) {
            return objs[i];
        }
    }
    return null;
};
When you receive data from your socket, grab that object from the canvas by ID and mutate it using the appropriate set methods or copying properties wholesale (or, if getObjectById returns null, create it).
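A rough sketch of that receive path, assuming canvas and socket are already set up as in the question and the message shape matches the sender above (the socket event name is illustrative):

socket.on('whiteboardMessage', function (msg) {
    var existing = canvas.getObjectById(msg.object.id);
    if (existing) {
        existing.set(msg.object);   // copy the received properties wholesale
        canvas.renderAll();         // repaint with the updated object
    } else {
        // Not on this canvas yet: enliven the serialized data and add it
        fabric.util.enlivenObjects([msg.object], function (objects) {
            objects.forEach(function (o) { canvas.add(o); });
        });
    }
});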
I have an application written in NodeJS with Express and am attempting to use EventEmitter to create a kind of plugin architecture, with plugins hooking into the main code by listening to emitted events.
My problem comes when a plugin function makes an async request (to get data from Mongo in this case): the plugin code finishes and returns control back to the original emitter, which then completes its execution before the async request in the plugin code finishes.
E.g:
Main App:
// We want to modify the request object in the plugin
self.emit('plugin-listener', request);
Plugin:
// Plugin function listening to 'plugin-listener'; 'request' is an arg
console.log(request);

// Call to DB (async)
this.getFromMongo(some_data, function(response) {
    // this may not get called until the plugin function has finished!
});
My reason for avoiding a callback function back to the main code from the 'getFromMongo' function is that there may be zero or many plugins listening to the event. Ideally I want some way to wait for the DB stuff to finish before returning control to the main app.
Many Thanks
Using the EventEmitter for plugin/middleware management is not ideal, because you cannot ensure that the listeners are executed sequentially if they contain asynchronous code. This is especially a problem when these listeners interact with each other or with the same data.
That's why, for example, connect/express middleware functions are stored in an array and executed one after the other instead of via an EventEmitter; each of them needs to call a next() function when it is done with its task.
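A very rough sketch of that middleware-array idea (this is illustrative, not connect/express internals):

type Middleware = (req: unknown, next: (err?: Error) => void) => void;

// Run the handlers one after the other; each decides when the next one starts
function runSequentially(middleware: Middleware[], req: unknown, done: (err?: Error) => void) {
    let index = 0;
    const next = (err?: Error) => {
        if (err || index >= middleware.length) {
            return done(err);
        }
        middleware[index++](req, next); // a handler calls next() once its async work finishes
    };
    next();
}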
You can't mix asynchronous calls with synchronous behavior. If you're going to stick with the event emitter (which may not be ideal for you, as Klovadis pointed out), you'll need to have your plugin emit an event that triggers a function in the main app containing the code you want to 'wait' to execute. You would also have to keep track of all the plugin calls you made and are still waiting on, so that your main code doesn't run until all the plugin calls have finished their MongoDB callbacks.
var callList = ['pluginArgs1', 'pluginArgs2', 'pluginArgs3'];
var pending = callList.length;

// Register the completion handler before emitting, and count down
// instead of splicing by index (indices shift as items are removed).
self.on('plugin-callback', function () {
    pending -= 1;
    if (pending === 0) {
        // we're done, do something
    }
});

for (var i = 0; i < callList.length; i++) {
    self.emit('plugin-listener', callList[i], i);
}
Had the same kind of decision to make about some events that I sometimes need to wait for before returning the response to the client, and sometimes not (when not in an HTTP request context).
The easiest way for me was to add a callback as the last argument of the event.
Stuff.emit('do_some_stuff', data, data2, callback);
In the event handler, check whether a callback was provided:
Stuff.on('do_some_stuff', function(data, data2, callback) {
    // stuff to do
    // ...
    // err and result come from the work done above
    if (typeof callback === "function") return callback(err, result);
});
I know that mixing events and callbacks can be messy, but that works fine for what I need.
The other solution I see is the one proposed by @redben: emit another event at the end of the handler. The problem in an HTTP context is that you need unique keys so your events don't get mixed up if they do different stuff per user.
Haven't tried it myself, but you could use a property on the event's data object as an array of functions to be executed by the code that emitted the event:
Listeners
foo.on('your-event', function(data) {
    console.log(data);
    // Then add the asynchronous code to a callbacks array
    // in the event data object
    data.callbacks.push(function(next) {
        getFromMongo(some_data, function(err, result) { next(err); });
    });
});
Emitter
// initialise the callbacks array before emitting
data.callbacks = [];
self.emit('your-event', data);
// listeners have modified the data object,
// some might have added callbacks to data.callbacks
// (this assumes you use the async library)
async.series(data.callbacks);
This seems quite dangerous, but I have to do it anyway...
const { EventEmitter } = require("events");

const ee = new EventEmitter();

// this must run inside an async function (or with top-level await)
if (ee.listeners("async-event").length > 0) {
    await new Promise((resolve) => {
        ee.emit("async-event", data1, data2, resolve);
    });
}
Otherwise, just emit the event back-and-forth.
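For completeness, a listener for the sketch above would call the resolve it receives once its asynchronous work is done (doSomeAsyncWork is a placeholder):

ee.on("async-event", (data1, data2, done) => {
    // do the asynchronous work, then unblock the awaiting emitter
    doSomeAsyncWork(data1, data2).then(() => done());
});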