I understand that every time I listen for an event, a new StreamSubscription object is created
element.onMouseMove.listen
My question is: when will this StreamSubscription object be freed from memory? Will it linger there until I call its cancel() method? Or is it enough to remove the element/object the StreamSubscription is listening to?
When exactly do I have to worry about memory leaks?
When the element is removed from the DOM and no 'active' variable has a reference to it, the garbage collector will release the memory of both the element and the StreamSubscription.
By 'active' I mean a variable held by an object that can't be garbage collected because that object is itself still referenced.
If the element is kept in the DOM for a long time but the listeners come and go, you should subscribe and unsubscribe explicitly, cancelling the subscription as soon as a listener is no longer interested in the events.
subscribe:
StreamSubscription _moveSubscr = element.onMouseMove.listen(moveHandler);
unsubscribe:
if (_moveSubscr != null) _moveSubscr.cancel();
I am curious about the event parameter that gets passed to IOLockWakeup and IOLockSleep{Deadline}.
I understand that the event is an address that gets passed to both functions. I am assuming this address is used to essentially notify the thread.
So my question is: assuming i is an int and we are using its address, how do these functions know when to sleep and wake up?
Is the assumption that:
when IOLockWakeup is called, the contents of event are 0 (which it then changes to a non-zero value), and
when IOLockSleepDeadline is called, the contents of event were 0 at the time it was called, and it will stop sleeping because the contents become non-zero?
And when we keep calling these functions (in a workloop context), are the contents of the event parameter automatically set to zero when IOLockSleep* is called (and when it wakes up), since IOLockWakeup presumably changes this to a non-zero value?
You'll notice that the event parameter is of type void*, not int*:
int IOLockSleep( IOLock * lock, void *event, UInt32 interType);
The event parameter is an arbitrary pointer: it's never dereferenced, and it doesn't matter what's stored there. It's used purely for identification purposes, so don't pass NULL, for example, because that's not a unique value.
IOLockSleep always suspends the running thread, and IOLockWakeup wakes up any thread that's sleeping on that address. If no such thread is waiting, nothing at all happens. This is why you'll usually want to pair the sleep/wakeup with some condition that's protected by the lock, and send the wakeup while holding the lock. The thing to avoid is going to sleep after the wakeup has already been sent, in which case your sleeping thread might sleep forever.
So, you'll have some condition for deciding whether or not to sleep, and you'll update that condition before calling wakeup, while holding the lock:
IOLock* myLock;
bool shouldSleep;
…
// sleep code:
IOLockLock(myLock);
while (shouldSleep)
{
IOLockSleep(myLock, &shouldSleep, THREAD_UNINT);
}
IOLockUnlock(myLock);
…
// wakeup code:
IOLockLock(myLock);
shouldSleep = false;
IOLockWakeup(myLock, &shouldSleep, true /* or false, if we want to wake up multiple sleeping threads */);
IOLockUnlock(myLock);
Here, I've used the address of shouldSleep for the event parameter, but it could be anything; it's just convenient because I know no other kext will be using that pointer, since no other kext has access to that variable.
Does anyone know how to write a custom EventDispatcher based on the javafx.event package? I searched Google & Co. but didn't find a nice example.
Does anyone have a minimal example for me? That would be nice. I tried a few times to understand it, but failed.
The first thing to realize is how JavaFX dispatches events.
When an Event is fired it has an associated EventTarget. If the target was in the scene-graph then the path of the Event starts at the Window and goes down the scene-graph until the EventTarget is reached. The Event then goes back up the scene-graph until it reaches the Window again. This is known as the "capturing phase" and the "bubbling phase", respectively. Event filters are invoked during the capturing phase and event handlers are invoked during the bubbling phase. The EventHandlers set using the onXXX properties (e.g. onMouseClicked) are special types of handlers (i.e. not filters).
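To make the two phases concrete, here is a minimal, self-contained sketch (the class name PhaseDemo and the printed labels are purely illustrative). Clicking the button prints the filter line before the two bubbling-phase lines:
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.input.MouseEvent;
import javafx.stage.Stage;

public class PhaseDemo extends Application {
    @Override
    public void start(Stage stage) {
        Button button = new Button("Click me");

        // Filters run during the capturing phase (Window -> ... -> Button)
        button.addEventFilter(MouseEvent.MOUSE_CLICKED,
                e -> System.out.println("filter (capturing)"));

        // Handlers run during the bubbling phase (Button -> ... -> Window)
        button.addEventHandler(MouseEvent.MOUSE_CLICKED,
                e -> System.out.println("handler (bubbling)"));

        // onXXX property handlers are also invoked during the bubbling phase
        button.setOnMouseClicked(e -> System.out.println("onMouseClicked (bubbling)"));

        stage.setScene(new Scene(button, 200, 100));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}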
The EventDispatcher interface has the following method:
public Event dispatchEvent(Event event, EventDispatchChain tail) { ... }
Here, event is the Event being dispatched and tail is the EventDispatchChain built, possibly recursively, by EventTarget.buildEventDispatchChain(EventDispatchChain). The dispatchEvent method should return null if the event is consumed during its execution.
The EventDispatchChain is a stack of EventDispatchers. Every time you call tail.dispatchEvent(event) you are essentially popping an EventDispatcher off the top and invoking it.
@Override
public Event dispatchEvent(Event event, EventDispatchChain tail) {
// First, dispatch event for the capturing phase
event = dispatchCapturingEvent(event);
if (event.isConsumed()) {
// One of the EventHandlers invoked in dispatchCapturingEvent
// consumed the event. Return null to indicate processing is complete
return null;
}
// Forward the event to the next EventDispatcher in the chain
// (i.e. on the stack). This will start the "capturing" on the
// next EventDispatcher. Returns null if event was consumed down
// the chain
event = tail.dispatchEvent(event);
// once we've reached this point the capturing phase has completed
if (event != null) {
// Not consumed from down the chain so we now handle the
// bubbling phase of the process
event = dispatchBubblingEvent(event);
if (event.isConsumed()) {
// One of the EventHandlers invoked in dispatchBubblingEvent
// consumed the event. Return null to indicate processing is complete
return null;
}
}
// return the event, or null if tail.dispatchEvent returned null
return event;
}
You're probably wondering where dispatchCapturingEvent and dispatchBubblingEvent are defined. These methods would be created by you and would invoke the appropriate EventHandlers. You might also be wondering why these methods return an Event. The reason is simple: During the processing of the Event these methods, along with tail.dispatchEvent, might alter the Event. Other than consume(), however, Event and its subclasses are basically immutable. This means any other alterations require the creation of a new Event. It is this new Event that should be used by the rest of the event-handling process.
The call to tail.dispatchEvent will virtually always return a new instance of the Event. This is due to the fact each EventDispatcher in the EventDispatchChain is normally associated with its own source (e.g. a Label or Window). When an EventHandler is being invoked the source of the Event must be the same Object that the EventHandler was registered to; if an EventHandler was registered with a Window then event.getSource() must return that Window during said EventHandler's execution. The way this is achieved is by using the Event.copyFor(Object,EventTarget) method.
Event oldEvent = ...;
Event newEvent = oldEvent.copyFor(newSource, oldEvent.getTarget());
As you can see, the EventTarget normally remains the same throughout. Also, subclasses may override copyFor, and some, such as MouseEvent, also define overloads of it.
How are the events actually dispatched to the EventHandlers though? Well, the internal implementation of EventDispatcher makes them a sort of "collection" of EventHandlers. Each EventDispatcher tracks all filters, handlers, and property-handlers (onXXX) that have been added to or removed from its associated source (e.g. Node). Your EventDispatcher doesn't have to do this but it will need a way to access wherever you do store the EventHandlers.
During the capturing phase the EventDispatcher invokes all the appropriate EventHandlers added via addEventFilter(EventType,EventHandler). Then, during the bubbling phase, the EventDispatcher invokes all the appropriate EventHandlers added via addEventHandler(EventType,EventHandler) or setOnXXX (e.g. setOnMouseClicked).
What do I mean by appropriate?
Every fired Event has an associated EventType. Said EventType may have a super EventType. For instance, the "inheritance" tree of MouseEvent.MOUSE_ENTERED is:
Event.ANY
InputEvent.ANY
MouseEvent.ANY
MouseEvent.MOUSE_ENTERED_TARGET
MouseEvent.MOUSE_ENTERED
When dispatching an Event you have to invoke all the EventHandlers registered for the Event's EventType and all the EventType's supertypes. Also, note that consuming an Event does not stop processing of that Event for the current phase of the current EventDispatcher but instead finishes invoking all appropriate EventHandlers. Once that phase for that EventDispatcher has completed, however, the processing of the Event stops.
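As an illustration of those two rules, here is a deliberately simplified bubbling-phase dispatch. The class name, the source field, the bubblingHandlers map and the handlersFor helper are my own names and assumptions about how you might store handlers, not part of the JavaFX API; the important parts are the getSuperType() walk and the fact that consuming the event does not short-circuit the inner loop:
// Assumes: import java.util.*; and import javafx.event.*;
public class SimpleEventDispatcher implements EventDispatcher {

    private final Object source;   // whatever this dispatcher is attached to
    private final Map<EventType<?>, List<EventHandler<Event>>> bubblingHandlers = new HashMap<>();

    public SimpleEventDispatcher(Object source) {
        this.source = source;
    }

    @Override
    public Event dispatchEvent(Event event, EventDispatchChain tail) {
        // The capturing phase is omitted here to keep the sketch short; see the
        // dispatchEvent skeleton earlier in this answer for the full structure.
        event = tail.dispatchEvent(event);
        if (event == null) {
            return null;                              // consumed further down the chain
        }
        event = dispatchBubblingEvent(event);
        return event.isConsumed() ? null : event;
    }

    private Event dispatchBubblingEvent(Event event) {
        // Re-source the event for this dispatcher's owner, keeping the original target
        Event current = event.copyFor(source, event.getTarget());

        // Walk the EventType chain, e.g. MOUSE_ENTERED -> MOUSE_ENTERED_TARGET
        // -> MouseEvent.ANY -> InputEvent.ANY -> Event.ANY
        for (EventType<?> type = current.getEventType(); type != null; type = type.getSuperType()) {
            for (EventHandler<Event> handler : handlersFor(type)) {
                // Consuming does not stop this phase: every appropriate handler still runs.
                handler.handle(current);
            }
        }
        return current;
    }

    private List<EventHandler<Event>> handlersFor(EventType<?> type) {
        return bubblingHandlers.getOrDefault(type, Collections.emptyList());
    }
}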
Whatever mechanism you use to store the EventHandlers must be capable of concurrent modification by the same thread. This is because an EventHandler may add or remove another EventHandler to or from the same source for the same EventType for the same phase. If you stored them in a regular List this means the List may be modified while you're iterating it. A readily available example of an EventHandler that may remove itself is WeakEventHandler. A WeakEventHandler will attempt to remove itself if it is invoked after it has been "garbage collected".
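One easy way to satisfy that requirement is to store each per-EventType handler list in a CopyOnWriteArrayList: its iterator works over a snapshot, so a handler can add or remove handlers for the same type and phase while the dispatch loop from the previous sketch is still iterating. The registration methods below are again my own names, not JavaFX API:
// Assumes: import java.util.concurrent.CopyOnWriteArrayList;
public void addHandler(EventType<?> type, EventHandler<Event> handler) {
    List<EventHandler<Event>> handlers =
            bubblingHandlers.computeIfAbsent(type, t -> new CopyOnWriteArrayList<>());
    if (!handlers.contains(handler)) {   // avoid double registration (see the note below)
        handlers.add(handler);
    }
}

public void removeHandler(EventType<?> type, EventHandler<Event> handler) {
    List<EventHandler<Event>> handlers = bubblingHandlers.get(type);
    if (handlers != null) {
        handlers.remove(handler);        // safe even while dispatchBubblingEvent is iterating
    }
}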
Also, I don't know if this is required, but the internal implementation doesn't allow the same EventHandler to be registered more than once for the same source, EventType, and phase. Remember, though, that the EventHandlers added via addEventHandler and those added via setOnXXX are handled separately even though they are both invoked during the same phase (bubbling). Also, calling setOnXXX replaces any previous EventHandler set for the same property.
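Finally, to get your dispatcher into the EventDispatchChain for a particular node, set it through the eventDispatcher property (Node, Scene and Window all have one). A common approach, sketched below, is to keep a reference to the original dispatcher and delegate to it, so the default filter/handler processing and the rest of the chain are preserved:
// Assumes: import javafx.event.EventDispatcher; import javafx.scene.control.Button;
Button button = new Button("demo");                 // any Node, Scene or Window works
EventDispatcher original = button.getEventDispatcher();

button.setEventDispatcher((event, tail) -> {
    // Your own per-node processing goes here ...
    System.out.println("dispatching " + event.getEventType());

    // ... then delegate so the default capturing/bubbling behaviour still happens.
    return original.dispatchEvent(event, tail);
});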
I have an actor where I want to store my mutable state inside a map.
Clients can send Get(key:String) and Put(key:String,value:String) messages to this actor.
I'm considering the following options.
Don't use futures inside the Actor's receive method. This may have a negative impact on both latency and throughput if I have a large number of gets/puts, because all operations will be performed in order.
Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and puts inside a Future.
Given that java.util.concurrent.ConcurrentHashMap is thread-safe and provides a finer level of granularity, I was wondering if it is still a problem to close over the ConcurrentHashMap inside a Future created for each put and get.
I'm aware that it's generally a really bad idea to close over mutable state inside a Future inside an Actor, but I'm still interested to know whether it is correct in this particular case.
In general, java.util.concurrent.ConcurrentHashMap is made for concurrent use. As long as you don't try to transport the closure to another machine, and you think through the implications of it being used concurrently (e.g. if you read a value, use a function to modify it, and then put it back, do you want to use the replace(key, oldValue, newValue) method to make sure it hasn't changed while you were doing the processing?), it should be fine in Futures.
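To make the replace(key, oldValue, newValue) point concrete, the usual read-modify-write pattern is a small retry loop. A minimal sketch in plain Java, since ConcurrentHashMap is a Java collection (the map, key and update function here are placeholders, not anything from the question):
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

class ReplaceLoopExample {
    static void update(ConcurrentHashMap<String, String> map, String key, UnaryOperator<String> f) {
        while (true) {
            String oldValue = map.get(key);
            if (oldValue == null) {
                return;                              // nothing to update (or use putIfAbsent)
            }
            String newValue = f.apply(oldValue);
            // Succeeds only if no other thread changed the entry while we computed newValue
            if (map.replace(key, oldValue, newValue)) {
                return;
            }
            // Another thread won the race: re-read and try again
        }
    }
}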
Maybe a little late, but in the book Reactive Web Applications the author shows a way around this specific problem, using pipeTo as below.
def receive = {
case ComputeReach(tweetId) =>
fetchRetweets(tweetId, sender()) pipeTo self
case fetchedRetweets: FetchedRetweets =>
followerCountsByRetweet += fetchedRetweets -> List.empty
fetchedRetweets.retweets.foreach { rt =>
userFollowersCounter ! FetchFollowerCount(
fetchedRetweets.tweetId, rt.user
)
}
...
}
where followerCountsByRetweet is mutable state of the actor. The result of fetchRetweets(), which is a Future, is piped to the same actor as a FetchedRetweets message; the actor then acts on that message to modify its own state. This avoids any concurrent operation on the state.
I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does.
There is a stacktrace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs().
I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called, I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling.
The crash happens in blk_fetch_request() during the following:
if (!__blk_end_request(req, res, bytes)){
printk(KERN_ERR "%s next request\n", DRIVER_NAME);
req = blk_fetch_request(q);
} else {
printk(KERN_ERR "%s same request\n", DRIVER_NAME);
}
where bytes is updated by the request handler to be the total length of IO (summed length of each scatter-gather segment).
It turned out this was due to re-entrancy of the request function. Because I was unlocking in the middle to allow IRQs to come in, the request function could be called again, would take the lock (while the original request handler was waiting on IO) and then the wrong handler would get the IRQ and everything went south with stacks of failed IO.
The way I solved this was to set a "busy" flag at the start of the request function, clear it at the end and return immediately at the start of the function if this is set:
static void mydev_submit_req(struct request_queue *q){
struct mydevice *dev = q->queuedata;
// We are already processing a request
// so reentrant calls can take a hike
// They'll be back
if (dev->has_request)
return;
// We own the IO now, new requests need to wait
// Queue lock is held when this function is called
// so no need for an atomic set
dev->has_request = 1;
// Access request queue here, while queue lock is held
spin_unlock_irq(q->queue_lock);
// Perform IO here, with IRQs enabled
// You can't access the queue or request here, make sure
// you got the info you need out before you release the lock
spin_lock_irq(q->queue_lock);
// you can end the requests as needed here, with the lock held
// allow new requests to be processed after we return
dev->has_request = 0;
// lock is held when the function returns
}
I am still not sure why I consistently got the stacktrace from swiotlb_unmap_sg_attrs(), however.
If I have a class that listens to event emitters, is it wrong practice to bind on every instance?
function MyClass() {
emitter.on('ready', function() {
// do something
});
}
var myclass = new MyClass();
If I call emitter.on() multiple times, it warns me.
(node) warning: possible EventEmitter memory leak detected. 11
listeners added. Use emitter.setMaxListeners() to increase limit.
Are event emitters meant to be bound only once per module, outside of class instances?
If this is wrong, then how do I access the class instance when events are triggered?
Thanks
The warning is that you're attaching 11 event listeners to the ready event on a single event emitter.
Generally, when you listen to the same event multiple times on a single event emitter, it's likely a bug. For example, say you have an http event emitter: if you're listening on the request event 11 times, that's probably a bug, because you only want to listen for and handle request once.
This is a debugging tool. You can get around it by doing:
emitter.setMaxListeners(500); // or whatever you think is a sensible limit