I need to capture data from an instance generated by <template is="dom-repeat"> in Polymer (v1.2.4), and I am not sure what the safest way to do so would be, considering the myriad of shadow DOM implementations out there (the client browser might be running the polyfill, etc.).
A simple example:
<template is="dom-repeat" items="[[myItems]]" id="collection">
<paper-card on-tap="handleTap">
(...)
What is the most reliable way to access the model data from the event handler?
1.
handleTap: function(e) {
var data = e.model.get('item.myData');
}
2.
handleTap: function(e) {
var data = this.$.collection
.modelForElement(Polymer.dom(e).localTarget)
.get('item.myData');
}
My concern is that the simplest (#1) option might be working as expected in my environment but can get buggy in other browsers.
And even in option #2, I am not confident if it is really necessary to normalize the event target (as recommended in the official Polymer guide on events) prior to passing it to modelForElement.
Both should work, but it seems you should fire a custom event rather than trying to inspect a child model. Whatever component owns "item.myData" should fire a custom event on tap with "item.myData" as part of the event detail, and you should then set up a listener for that custom event.
See custom events for more details.
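A minimal sketch of that approach (the <my-card> element, the item-tap event name, and handleItemTap are made-up names for illustration):
<template is="dom-repeat" items="[[myItems]]" id="collection">
  <my-card item="[[item]]" on-item-tap="handleItemTap"></my-card>
</template>
// inside my-card (Polymer 1.x): forward the tap as a custom event carrying the data
listeners: { 'tap': '_onTap' },
_onTap: function() {
  this.fire('item-tap', { myData: this.item.myData });
},
// in the host element: read the data from the event's detail
handleItemTap: function(e) {
  var data = e.detail.myData;
}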
TL;DR: React components have two kinds of code:
rendering code that draws the component, which depends on certain props that affect the component's visual appearance (call them "visual props"), and
event-handling code, e.g., onclick handlers, which depends on certain props that don't affect the component's visual appearance (call them "event props").
When event props change, they cause the component to re-render, even though its appearance doesn't change. The only thing changing is its future event-handling behavior.
What's best practice for removing event props to avoid unnecessary re-renders, while still allowing intelligent event handling?
Longer version
My question is subtly different from this question about how to give handlers to dumb React components; see below for explanation.
I have an application with many React components (hundreds to thousands of SVG elements; it's a CAD application).
There are many "edit modes" in this application (imagine a drawing program like Inkscape): depending on the edit mode, you might want a left-click to select an object, or drag to draw a selection outline rectangle, or do any number of different edits to the component that was clicked, depending on the edit mode.
In my original architecture, every one of these components had the current edit mode as a prop. Each component would use the mode prop to decide what to do in response to events such as clicks: different sorts of Redux actions are dispatched in response to clicks depending on the current mode. This means that every time the user switches the edit mode, every component gets re-rendered, even though none of them change visually. In a large design, it takes several seconds to re-render.
I've altered it to improve performance. Now each component is dumber: none of them know the edit mode, but this means they don't know what to do in response to a click. In some cases, I solved this by having each component dispatch a "dumber" action that says essentially "I was clicked". Middleware intercepts this action, looks up the edit mode in the Redux store, and dispatches an appropriate smart action based on the edit mode. In other cases, I simply let the component dispatch the original action (e.g., Select), even if that action may not be valid for the current edit mode, and similarly rely on the middleware to intercept and stop the action if it is invalid.
This solution feels inelegant. Now many more actions get dispatched, even though most of them are thrown away. It's also nothing like what I find in introductions and tutorials on middleware, which mostly talk about how it's good for async work (I don't need any of this to be asynchronous, since these actions generally are not talking to the network or files) and side effects such as logging (there are no side effects here; I simply want a user interaction to trigger a normal Redux action to be dispatched).
I feel as though a better solution would be to access the Redux store as a global variable within event handling code. I know this is emphatically not safe to do with rendering code, since it breaks the rule "React views should be a deterministic function of their props and state". But it feels safer to do with event-handling code.
I realize it's common with "very dumb" React components to pass click handlers in as a prop (e.g., this stackoverflow answer), but I don't see this as a solution. If the handler has the edit mode encoded in it as a bound value, then the handler itself needs to change when the edit mode changes, which, since the handler is a prop, requires re-rendering the component. So I think the issue I'm describing is orthogonal to whether the handler is passed into the component as a prop or written specifically for the component.
So to summarize, there are three options I see:
Pass all data required for intelligent event handling as props. (Causes unnecessary re-renders.)
Have React components dispatch actions "promiscuously", and rely on middleware (which has access to the Redux store) to stop and/or transform the action if necessary. (As I implemented it, this is harder to understand, and it puts lots of unrelated application logic in one place, where it feels like it doesn't belong. It also makes for a messier Redux history of actions, making it harder to debug with Redux DevTools, and it is not a pattern I've seen in any documentation or tutorial on Redux middleware.)
Allow event handler code (unlike rendering code) to access the Redux store as a global variable, to make intelligent decisions about what action to dispatch. (Seems okay, but it scares me to use global variables in this way, and I'm worried that it could cause a problem I'm not seeing.)
Is there a fourth option I'm missing?
I have an idea for how to solve this in a way that feels close to the Redux spirit. (Though I still lean towards accessing global variables in event handlers to solve the problem.)
Redux has some notion of "action creators": functions that return action objects. This always seemed like an unnecessary layer of abstraction to me, but perhaps a similar idea can be used here. (I use Dart, not Javascript, so the code below is Dart; hopefully the answer still makes sense.)
The idea is to have a new type of action called ActionCreator<A extends Action> (a subtype of Action). An ActionCreator<A> is an object with a method of type
A create(AppState state)
In other words, it takes the whole AppState and returns an Action, which lets it do the necessary data lookups. As an object, it can contain fields describing data gathered from the code (usually View event-handler code) that instantiated it; for example, it could reference a Selectable to select. create() returns the concrete action, or null (or some special value) to indicate that the action should be thrown away.
For example, for a click handler, we'd dispatch an ActionCreator:
class Select {
final Item item_clicked;
Select(this.item_clicked);
}
class ClickedAction implements ActionCreator<Select> {
final Item item_clicked;
ClickedAction(this.item_clicked);
Select create(AppState state) =>
state.ui_state.select_mode_is_on ? Select(this.item_clicked) : null;
}
// ...
onClick = (event) {
props.dispatch(ClickedAction(props.item));
}
And in middleware, once we have access to the full state, this can be turned into a concrete action, but only if it's legal. But the nice thing is that the next piece of code is generic and handles any such ActionCreator, so I wouldn't have to remember to keep editing this code whenever I create a new Action that needs to be "conditionally dispatched".
action_creator_middleware(Store<AppState> store, action, NextDispatcher next) {
if (action is ActionCreator) {
var maybe_action = action.create(store.state);
if (maybe_action != null) {
store.dispatch(maybe_action);
}
} else {
next(action);
}
}
The disadvantage of this is that it still dispatches many more actions than we really need; most will get thrown away. It's a "cleaner" implementation of what I need, but I still think that for event-handling code, accessing the Redux store as a global variable is probably perfectly fine. I don't see in that approach any of the problems one would expect if the view code went outside of its React props and accessed global variables.
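For comparison, option 3 would look roughly like this (sketched in JavaScript rather than Dart, assuming the store is exported as a module-level singleton and that a plain SELECT action type exists):
import { store } from './store';  // the app's single Redux store

function makeClickHandler(item) {
  return function onClick(event) {
    // event-handling code (unlike rendering code) consults the store directly
    if (store.getState().ui_state.select_mode_is_on) {
      store.dispatch({ type: 'SELECT', item: item });
    }
  };
}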
I don't really understand what delegate and promise are.
According to the docs -
delegate would bind a selector and event to some sort of wrapping container that can be used again at a later time for current and future items.
promise() would remap things back to when it was first bound if everything newly loaded matches. Maybe I don't really understand this promise method.
What if the wrapper is still there, but the contents in the wrapper container have changed, and/or reloaded via Ajax? Why is it that the events are not triggering or working as it would the first time it is bound?
And yes, I have been to the docs page, I just don't understand their explanations completely.
I'm a bit confused by this question. I think this is because you are confused by promise and delegate. They are in fact completely unrelated features of jQuery. I'll explain each separately:
delegate
delegate is a feature of jQuery that was introduced in jQuery 1.4.2. (It is a nicer approach to the live feature that was added in jQuery 1.3). It solves a particular problem that comes with modifying the DOM, and particularly with AJAX calls.
When you bind an event handler, you bind it to a selection. So you might do $('.special').click(fn) to bind an event handler to all the members of the special class. You bind to those elements, so if you then remove the class from one of those elements, the event will still be triggered. Inversely, if you add the class to an element (or add a new element into the DOM), it won't have the event bound.
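A quick illustration of that (the class and container names are just examples):
// binds only to the .special elements that exist right now
$('.special').click(function() { alert('clicked!'); });

// this element is added later, so it never receives the handler
$('#containingElement').append('<a href="#" class="special">new link</a>');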
There is a feature of Javascript that mitigates this, called "event bubbling". When an event is triggered, the browser first notifies the element where the event originated, then goes up the DOM tree and notifies each ancestor element. This means that you can bind an event handler to an element high up the DOM tree and handle events triggered on any of its child elements (even ones that don't exist when the handler is bound).
delegate is jQuery's implementation of this. First, you select a parent element. Then you specify a selector – the handler will only be run if the originating element matches this selector. Then you specify an event type, such as click, submit, keydown, just as with bind. Then finally you specify the event handler.
$('#containingElement').delegate('a.special', 'click', function() {
alert('This will happen on all links with the special class');
});
promise
promise is another relatively recent addition to the jQuery featureset. It is part of the Deferred concept that was introduced in jQuery 1.5. (I think the similarity in sound between "deferred" and "delegate" is probably the source of confusion.) This is a way of abstracting away the complications of asynchronous code. The best example of this is with AJAX calls, as the object returned by $.ajax is a Deferred object. For instance:
$.ajax({
url: 'somepage.cgi',
data: {foo: 'bar'}
}).done(function() {
// this will be run when the AJAX request succeeds
}).fail(function() {
// this will be run when the AJAX request fails
}).always(function() {
// this will be run when the AJAX request is complete, whether it fails or succeeds
}).done(function() {
// this will also be run when the AJAX request succeeds
});
So it is in many ways the same as binding success handlers in the $.ajax call, except that you can bind more than one handler, and you can bind them after the initial call.
Another case where it's useful to handle things asynchronously is animations. As with AJAX, you can provide callbacks to the animation functions, but it would be nicer to do this with syntax similar to the AJAX example above.
In jQuery 1.6, this functionality was made possible, and promise is part of that implementation. You call promise on a jQuery selection, and you get an object to which you can bind handlers that run when all the animations on the selected elements have completed.
For instance:
$('div.special').fadeIn(5000).promise().then(function() {
// run when the animation succeeds
}).then(function() {
// also run when the animation succeeds
});
Again, this is similar in effect to traditional methods, but it adds flexibility. You can bind the handlers later, and you can bind more than one.
Summary
Basically, there is no significant relationship between delegate and promise, but they're both useful features in modern jQuery.
I need to perform some action when the render() method has finished its work and appended all HTML elements to the DOM.
How do I subscribe to an onRenderEnds event (there is no such event)?
Can I write my own event outside of the SlickGrid code and attach it to the render() method?
There are events such as "onScroll" and "onViewportChanged", but they happen before render() has finished (in some cases).
Update:
I wrote a formatter for a column:
formatter: function(row, cell, value, columnDef, dataContext) {
  // myData presumably comes from the row's data object (dataContext)
  return "<div class='operationList' data-my='" + dataContext.myData + "'></div>";
}
When the grid has rendered (applying my formatter), I need to go through all the ".operationList" divs and convert them to other constructions (based on the data-my attribute). I need to replace the ".operationList" divs with a complex structure that has event handlers.
To answer my own comment, I've come up with the following hack. It may not be pretty, but it seems to work.
Add the following line to the render() method just below renderRows(rendered);
function render() {
...
renderRows(rendered);
trigger(self.onRenderCompleted, {}); // fire when rendering is done
...
}
Add a new event handler to the public API:
"onRenderCompleted": new Slick.Event(),
Bind to the new event in your code:
grid.onRenderCompleted.subscribe(function() {
console.log('onRenderCompleted');
});
The basic answer is: DON'T!
What you are proposing is a very bad design and goes against the core principles and architecture of SlickGrid.
You will end up doing a lot of redundant work and negating most of the performance advantages of SlickGrid. The grid creates and removes row DOM nodes on the fly as you scroll, and does so either synchronously or asynchronously depending on which suits best at the time. If you must have rich interactive content in the cells, use custom cell renderers and delegate all event handling to the grid level using its provided events, such as onClick. If the content of the cell absolutely cannot be created using a renderer, use async post-rendering - http://mleibman.github.com/SlickGrid/examples/example10-async-post-render.html. Even so, the grid content should not have any event listeners registered directly on the DOM nodes.
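For example, a minimal sketch of grid-level delegation (the "operations" column id and doSomethingWith are placeholders):
grid.onClick.subscribe(function (e, args) {
  var item = grid.getDataItem(args.row);      // row data for the clicked cell
  var column = grid.getColumns()[args.cell];  // definition of the clicked column
  if (column.id === "operations") {
    // act on the row data here instead of wiring handlers onto cell DOM nodes
    doSomethingWith(item);
  }
});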
To address #magiconair's comment, you really shouldn't render a whole SELECT with all its options and event handlers until a cell switches into edit mode.
I have an app that has several different types of form elements which all post data to the server with jQuery AJAX.
What I want to do is:
Show a loader during AJAX transmission
Prevent the user from submitting twice+ (clicking a lot)
This is easy to do on a one-off basis for every type of form on the site (comments, file upload, etc.), but I'm curious to learn whether there is a more global way to handle this.
Something that's smart enough to say:
If a form is submitting to the server and waiting for a response, ignore all submits
Show a DISABLED class on the submitted / clicked item
Show a loading class on the class="spinner" element closest to the submit item clicked
What do you think? Good idea? Done before?
Take a look at the jQuery Global Ajax Event Handlers.
In a nutshell, you can set events which occur on each and every AJAX request, hence the name Global Event Handlers. There are a few different events, I'll use ajaxStart() and ajaxComplete() in my code sample below.
The idea is that we show the loading indicator and disable the form and button in the ajaxStart() event, then re-enable the form and hide the loading element in the ajaxComplete() event.
var $form = $("form");
$form.ajaxStart(function() {
// show loading
$("#loading", this).show();
// Add class of disabled to form element
$(this).addClass("disabled");
// Disable button
$("input[type=submit]", this).attr("disabled", true);
});
And the AJAX complete event
$form.ajaxComplete(function() {
// hide loading
$("#loading", this).hide();
// Remove disabled class
$(this).removeClass("disabled");
// Re-enable button
$("input[type=submit]", this).removeAttr("disabled");
});
You might need to attach to the ajaxError event as well in case an AJAX call fails since you might need to clean up some of the elements. Test it out and see what happens on a failed AJAX request.
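For instance, a rough sketch of an error handler in the same style (what you actually do on failure is up to you):
$form.ajaxError(function() {
  // hide the loader and re-enable the form so the user can retry
  $("#loading", this).hide();
  $(this).removeClass("disabled");
  $("input[type=submit]", this).removeAttr("disabled");
});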
P.S. If you're calling $.ajax or similar ($.getJSON), you can still catch these events by binding the handlers to document (e.g., $(document).ajaxStart(...) and $(document).ajaxComplete(...)), since the AJAX call isn't attached to any particular element. You'll need to rearrange the code a little though, since you won't have access to $(this).
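Something along these lines (a sketch; here #loading is a page-level element rather than one inside the form):
$(document).ajaxStart(function() {
  $("#loading").show();
}).ajaxComplete(function() {
  $("#loading").hide();
});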
I believe you have to do #2 for sure, and #3 to improve the usability of your app. It is better to keep the backend dumb, but if duplicate submissions are a security issue you should handle that on the server as well.
Working on a Wicket application that adds markup to the DOM after onLoad via Wicket's built-in AJAX for an auto-complete widget. We have an IE6 glitch that means I need to reposition the markup coming in, and I am trying to avoid tampering with the Wicket javascript... blah blah blah... here's what I'm trying to do:
New markup arrives in the DOM (I don't have access to a callback)
Somehow I know this, so I fire my code.
I tried this, hoping the new tags would trigger onLoad events:
$("selectorForNewMarkup").live("onLoad", function(){ //using jQuery 1.4.1
//my code
});
...but have become educated that onLoad only fires on the initial page load. Is there another event fired when elements are added to the DOM? Or another way to sense changes to the DOM?
In everything I've bumped into on similar issues with new markup additions, they either have access to the callback function on .load() or similar, or they have a real JavaScript event to work with, so live() works perfectly.
Is this a pipe dream?
.live() doesn't work like this; it's a common misconception. .live() creates an event handler at the DOM root and waits for events to bubble up to it. If the selector matches the event target, .live() fires the bound handler.
It doesn't look for new objects and bind events to them in any way, rather it just listens for a bubble, and doesn't care when that object was added to the DOM.
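To illustrate the distinction (reusing the made-up selector from the question), this is the kind of thing .live() is good at:
// click is a bubbling event, so the handler runs for matching elements
// no matter when they were added to the DOM
$("selectorForNewMarkup").live("click", function() {
  // handle the element that was actually clicked
});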
You need to fire whatever code is needed to run manually when your load operation completes.
What will do this is the livequery plug-in; look specifically at the livequery( matchedFn ) call.
You can do something like this:
$('#myID').livequery(function() { $(this).offset()...stuff });
I guess this is what you are looking for: http://ananthakumaran.github.com/2010/02/19/wicket-post-ajax-handling.html