I notice that in GWT's DOMImplStandard.java, events are sunk by setting the onevent properties on the element to refer to an event dispatcher, e.g.:
protected native void sinkEventsImpl(Element elem, int bits) /*-{
    ...
    if (chMask & 0x00001) elem.onclick = (bits & 0x00001) ?
        @com.google.gwt.user.client.impl.DOMImplStandard::dispatchEvent : null;
    ...
}-*/;
The problem is that this may be a source of incompatibility with existing JavaScript code and other JS frameworks. Why do they use the elem.onevent = func approach as opposed to the preferred
elem.addEventListener('event',func,false);
which would allow the developer to add multiple event listeners to the element?
Thanks.
Troy
GWT's DOMImpl implementations have been (at least at the time they were written) benchmarked to use the fastest option on each browser; this is why DOMImplStandard uses event handler properties (and why DOMImplOpera doesn't have the if (chMask & 0x00001) part, because assigning the onxxx property is so fast there).
As for the potential incompatibility with other frameworks:
GWT is built around the idea that it owns the elements it creates, so if you have a third-party JS lib that messes with them, it's your fault (for trying to use both of them at the same time)
it could still be an issue with elements that you wrap() inside a widget (that also includes RootPanel.get(String)), but then again, you're held responsible if things don't play well together.
more importantly, using event handler properties in GWT won't be an issue if the other JS libs don't use them, and instead use addEventListener (or IE's attachEvent). So if you do have an incompatibility/conflict, first blame yourself (see above), then blame both GWT and your JS lib.
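To make that potential clash concrete, here is a minimal sketch (plain JavaScript, not GWT code; the element id and handlers are made up). Assigning the onclick property a second time silently replaces the first handler, while addEventListener lets both coexist:
var elem = document.getElementById('host');          // hypothetical element

// Library A uses the event handler property...
elem.onclick = function () { console.log('A'); };
// ...and library B overwrites it; only 'B' will ever run.
elem.onclick = function () { console.log('B'); };

// With addEventListener both handlers are kept and run in order.
elem.addEventListener('click', function () { console.log('A'); }, false);
elem.addEventListener('click', function () { console.log('B'); }, false);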
In brief: it's a non-issue.
I am looking into using one or the other method, and in particular method 2. Can anyone tell me the advantages and disadvantages of using the 2nd method over the 1st?
Method 1 - ViewModel.cs
PTBtnCmd = new Command<Templates.WideButton>((btn) =>
    MessagingCenter.Send<CFSPageViewModel, Templates.WideButton>(
        this, "PTBtn", btn));
Method 1 - MyPage.xaml.cs (SetLang etc. methods are in this file)
MessagingCenter.Subscribe<CFSPageViewModel, Templates.WideButton>(
    this, "PTBtn", (s, btn) =>
    {
        Utils.SetState(btn.Text, vm.PT);
        SetLangVisible(btn.Text);
        SetLangSelected(btn.Text);
        vm.CFSMessage = Settings.cfs.TextLongDescription();
    });
or
Method 2 - ViewModel.cs (SetLang etc. methods are in this file)
PTBtnCmd = new Command<string>(SetMode);
private void SetMode(string btnText)
{
    Utils.SetState(btnText, PT);
    SetLangVisible(btnText);
    SetLangSelected(btnText);
    CFSMessage = Settings.cfs.TextLongDescription();
}
I would also like to hear comments on the idea of adding methods into the ViewModel.cs code. Would it be better for these to be in another file?
The MessagingCenter
helps you keep your code decoupled. Sometimes you will find yourself in a position that requires you to create a reference between certain pieces of code, but by doing so, you would have to compromise on reusability and maintainability.
Try to use it as a last resort; usually there is
another way to achieve your desired functionality. While sending a message can be very
powerful, using it too much can really eat into your readability.
A use case example for MessagingCenter would be a case where you need to update values in multiple
parts of your app. You can subscribe to a message from multiple places and thus execute
code in multiple places when a message is received. Another use case could be that when some background process is done, it sends a message and you can then inform the user in your UI.
I would not use messaging in the VM layer, because your VM layer can then only be used in Xamarin.Forms. Some MVVM frameworks, like MvvmLight, offer a messaging capability. I would opt for that instead, as you could then reuse your VMs in WPF, UWP or UI frameworks other than XF.
Also, I wouldn't use messaging the way you have. I'd probably just use data binding and raise PropertyChanged events in the VM, which the view can react to.
To pass data between VMs I'd suggest navigation params or rethinking how you are using this data in general (use some sort of service or dumb down the UI depending on how "fat" your client app must be).
MessagingCenter, as @Andy mentioned, would cause reusability issues, but this does not mean that you cannot use it. My approach is wrapping it in a separate service and using that in the implementation. This lets you do two things: create a more convenient or better way to use it according to your use case, and keep the option to swap out the implementation of your messenger for any other pub-sub library (or your own impl.) if you need to use these VMs in a WPF project.
Of course using something more universal across platforms would be a great option too but it also depends on how much you are "allowed"/can use third party stuff. At least with MAUI this causes some problems, but this is for another topic.
I am in a highly dynamic context, heavily using dynamic instantiation of components from sources. Naturally I am concerned with the overhead from having to parse those sources each and every time an object is dynamically created. When the situation allows it, I am using manual caching:
readonly property var componentCache: new Object

function create(type) {
    var comp = componentCache[type]
    if (comp === undefined) { // "cache miss"
        comp = Qt.createComponent(type)
        if (comp.status !== Component.Ready) {
            console.log("Component creation failed: " + comp.errorString())
            return null
        } else {
            componentCache[type] = comp
        }
    }
    return comp.createObject()
}
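For completeness, this is how the helper above would typically be called from QML's JavaScript code (MyDelegate.qml, container and someProperty are hypothetical names):
var item = create("MyDelegate.qml")
if (item !== null) {
    item.parent = container          // reparent the new instance under a hypothetical Item
    item.someProperty = 42           // configure the freshly created instance
}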
Except that this is not always applicable, for example when using a Loader with a component which needs to specify object properties using the setSource(source, properties) function. In this scenario it is not possible to use a manually cached Component, as the function only takes a URL. The doc does vaguely mention "caching", but it is not exactly clear whether this cache is QML-engine wide for the component from that source or, more likely, just for that particular Loader.
If the active property is false at the time when this function is
called, the given source component will not be loaded but the source
and initial properties will be cached. When the loader is made active,
an instance of the source component will be created with the initial
properties set.
The question is how to deal with this issue, and is it even necessary? Maybe Qt does component from source caching by default? Caching certainly would make sense in terms of avoiding redundant source loading (from disk or worse - network), parsing and component preparation, but its effects will only be prominent in the case of excessive dynamic instantiation, and the "typical" QML dynamic object creation scenarios usually involve a one-time object, in which case the caching would be useless memory overhead. Caching also doesn't make sense in the context of the possibility that the source may change in between the instantiations.
So since I don't have the time to dig through the mess that is the private implementations behind Qt APIs, if I had to guess, I'd say that component from source caching is not likely, but as a mere guess, it may as well be wrong.
Not an answer per se, but I tripped into the question of component caching yesterday and was surprised to discover that Qt appears to cache components. At least when creating dynamic components, the log statements related to createComponent only appear once in my test app. I've searched around and haven't seen any specific info in the docs about caching. I did come across a couple of interesting methods in the QQmlEngine class. Then I came across the release notes for Qt 5.4:
The component returned by Qt.createComponent() is no longer parented
to the engine. Be sure to hold a reference, or provide a parent.
So? If a parent is provided, it is cached (?) That appears to be the case. (and for 5.5+ ?) If you want to manage it yourself, don't provide a parent and retain the reference. (?)
QQmlEngine Class
QQmlEngine::clearComponentCache() Clears the engine's
internal component cache. This function causes the property
metadata of all components previously loaded by the engine to be
destroyed. All previously loaded components and the property bindings
for all extant objects created from those components will cease to
function. This function returns the engine to a state where it
does not contain any loaded component data. This may be useful in
order to reload a smaller subset of the previous component set, or to
load a new version of a previously loaded component. Once the
component cache has been cleared, components must be loaded before any
new objects can be created.
void QQmlEngine::trimComponentCache()
Trims the engine's internal component cache.
This function causes the property metadata of any loaded components
which are not currently in use to be destroyed.
A component is considered to be in use if there are any extant
instances of the component itself, any instances of other components
that use the component, or any objects instantiated by any of those
components.
After some toying with the code, it looks like caching also takes place in the "problematic case" of Loader.setSource().
Running the example code from this question, I found out that regular component creation fails to expand a tree deeper than 10 nodes, because of Qt's hard-coded limit:
// Do not create infinite recursion in object creation
static const int maxCreationDepth = 10;
This causes component instantiation to abort if there are more than 10 nested component instantiations in a regular scenario, producing an incorrect tree and generating a warning.
This doesn't happen when setSource() is used, the obvious reason for this would be that the component is cached and thus no component instantiation takes place.
Can I check whether a JavaScript ext-lib such as the fancybox plugin already exists in my page (no matter its version)?
I use Liferay portlets; the same portlet can be placed twice on one page, and I already have some script conflicts now (Liferay AUI script on the layout configuration panel, slider in some assetPublisher ...).
If you are testing for native JavaScript functions (i.e. built-in or created using JavaScript), then:
if (typeof foo == 'function')
will always work, regardless of which library you are using. e.g. to test if jQuery is available:
if (typeof jQuery == 'function')
I would not trust the jQuery isFunction method for host objects, and if you aren't testing host objects, typeof is completely reliable.
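Applied to the original fancybox example, a minimal sketch (assuming fancybox registers itself as a jQuery plugin under jQuery.fn.fancybox):
if (typeof jQuery == 'function' && typeof jQuery.fn.fancybox == 'function') {
    // some version of fancybox is already on the page
} else {
    // safe to load your own copy
}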
Edit
Oh, I should add that if you are testing methods of host objects, there are many aspects to consider. The following is sufficient in the vast majority of cases:
if (hostObject && hostObject.method) {
    // call hostObject.method
}
It's worth noting that host objects aren't required to comply with ECMA-262 (though most modern implementations do to a large extent, at least for DOM objects). There are a number of implementations in use that have host objects that, when tested with typeof, will return "unknown" or similar; some older ones even threw errors.
Some host objects throw errors if tested with Object.prototype.toString.call(hostObject) (which jQuery's isFunction function uses), so unless you are going to implement a robust isHostMethod function, the above will do in the vast majority of cases.
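For reference, a rough isHostMethod sketch along those lines (the 'unknown' branch covers old IE ActiveX host methods; treat it as an illustration, not a battle-tested utility):
function isHostMethod(obj, property) {
    var t = typeof obj[property];
    return t == 'function' ||
           (t == 'object' && obj[property] !== null) ||   // some host methods report "object"
           t == 'unknown';                                 // old IE ActiveX methods
}

if (isHostMethod(document, 'getElementById')) {
    // safe to call document.getElementById(...)
}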
To check if a generic JS lib is loaded (i.e. not a jQuery plugin), test one of its functions:
if ($.isFunction(pluginFunction)) {
// Code to run if generic library is loaded
}
To check if a jQuery plugin is loaded:
if (jQuery().pluginName) {
// Code to run if plugin is loaded
}
Hope this helps!
Backbone.js highly recommends you use jQuery. However, it doesn't do things in a very jQuery-like way. For example, jQuery removes the need for the new operator, while Backbone makes heavy use of it.
On another note, I'm looking for a framework that is based more around prototypal inheritance than classical inheritance (new). jQuery doesn't fall under this category, this is just an architecture style I am leaning towards.
Are there any frameworks that use prototypal inheritance, or is it roll your own bridge pattern?
Backbone and jQuery solve different problems... Backbone essentially gives you a structure in order to build JavaScript-heavy apps... It gives you Models, Collections, Views and Controllers (although, based on only a day of playing with it, it feels to me like controllers are used for routing, and the views are kind of like a classic controller).
Backbone has a dependency on jQuery (or Zepto if you're that way inclined) to help it do things like AJAX requests.
You are correct - it is not very jQuery like, but it is providing you with something very different...
UPDATE
As of version 0.5.0, Backbone has renamed Controllers to Routers, which should make things a little more obvious for folks coming from MS MVC / Rails etc...
"We've taken the opportunity to clarify some naming with the 0.5.0 release. Controller is now Router"
http://documentcloud.github.com/backbone/#Router
I fully agree with the answer provided by Paul and also would like to restate that your question is ambiguous in its essentials.
jQuery is highly DOM-centric and provides excellent facilities for operating on the DOM. Be it changing styles, loading remote content into some portion of the document, responding to browser events ... in (almost) everything the core focus is on the DOM. For this kind of functionality, where you are operating on document content, prototypal inheritance and, more significantly, the style of accessing widgets through the DOM (check out the APIs of jQuery UI) work rather well. If you identify widgets as JavaScript objects, then you have to keep track of those objects as well ... which, in the programming style followed by jQuery UI etc., you don't have to, because you can access any widget present in the DOM by navigating the DOM structure or simply through its id (which essentially acts as a global identifier for the element).
backbone.js is altogether built for a different purpose. The very introduction clearly states that it is built upon the underlying philosophy that tying your data to the DOM is bad.
When you build an application structured as per backbone.js conventions, you are essentially always concentrating on JavaScript objects which may be somehow linked to the DOM.
You are defining models that interface with server data sources, models which trigger events when their data contents are manipulated, collections which help you manage large data sets ... whatever it is, you are always operating on JavaScript objects which are not hardwired into the document structure. For such a scenario, it is more usual to think in terms of a traditional object-oriented model.
Once you have an object, the workflow is not much different from what you are used to with jQuery, because Backbone too, just like jQuery, advocates the observer pattern.
So, in the same manner that you can bind event handlers to DOM elements with jQuery, you attach event handlers to custom events dispatched by models, collections etc. So the two amalgamate well.
As far as other frameworks are concerned, you might want to check out Knockout, which provides data bindings, observables etc. and does not require you to use the new keyword to create instances; rather, instances are created by calling functions in the ko namespace, which might appeal to your tastes. KO has extensive documentation and code examples, which you can explore to decide whether it suits your tastes. I cannot comment more on KO because I have limited knowledge of it, but as far as backbone.js is concerned I would strongly recommend you not to dismiss the framework just because you don't like the way some things are implemented. It does elegantly and robustly what it is supposed to do and maintains an incredibly small footprint.
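To illustrate that point about instance creation in Knockout, a small sketch using its factory functions (ko.observable, ko.computed and ko.applyBindings are real KO APIs; the view model itself is made up):
var viewModel = {
    firstName: ko.observable('Ada'),
    lastName:  ko.observable('Lovelace')
};
// computed values are also created with a plain function call, no 'new' anywhere
viewModel.fullName = ko.computed(function () {
    return viewModel.firstName() + ' ' + viewModel.lastName();
});
ko.applyBindings(viewModel);   // wire the view model to the page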
I think you suffer from a misunderstanding of prototypal inheritance and what you actually want in a framework. If you're modifying prototypes but never using the new keyword, then you're never creating new objects. If you're looking for a framework to abstract away object creation, that's another topic, though you shouldn't be afraid to use the new keyword; it's a big part of JavaScript.
You can actually use the new syntax with jQuery, and when you don't use it, jQuery just calls itself again with the new syntax, passing along the arguments you gave it. This is pure syntactic sugar and makes very little difference either way.
"On another note, I'm looking for a framework that is based more around prototypal inheritance than classical inheritance (new)."
Backbone is built on prototypal inheritance through the extend method that is built into all models, collections, and views. It's quite problem-free and easy to use, and adds things like super (a hook into the parent's prototype), something that can be more difficult to do with just plain JS prototyping.
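A short sketch of that style (Animal and Dog are made-up models; extend itself is the real Backbone API):
var Animal = Backbone.Model.extend({
    defaults: { legs: 4 },
    describe: function () { return 'an animal with ' + this.get('legs') + ' legs'; }
});

// Dog's prototype chain points at Animal's prototype; no class declarations involved
var Dog = Animal.extend({
    describe: function () {
        // "super": call the parent implementation explicitly
        return Animal.prototype.describe.apply(this, arguments) + ' that barks';
    }
});

new Dog().describe();   // "an animal with 4 legs that barks"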
Backbone.js brings a coding standard for AJAX-driven web apps that don't have to refresh the page in order to serve data.
jQuery brings a set of tools to help you get basic things done without worrying about the browser you're running the code in.
They have nothing in common. It's like comparing a hammer (jQuery) with a toolbox (backbone.js). The hammer is just a part of the toolbox, not the other way around.
So, yes! Backbone.js is not like jQuery.
"I'm looking for a framework that is based more around prototypal
inheritance than classical inheritance (the new operator)".
There is no 'classical' inheritance in JavaScript. Actually, there is no other standard way to instantiate than using 'new'; maybe jQuery just provides a method that does the 'new' inside:
var jQuery = function( selector, context ) {
    // The jQuery object is actually just the init constructor 'enhanced'
    return new jQuery.fn.init( selector, context, rootjQuery );
},
How do I expose events to my plugin users?
I know that I should use:
$('#myPluginDiv').trigger('eventName', ["foo", "bar"]);
to trigger the event but I'm looking for best practices describing how to declare and invoke events in plugins.
I think you can inspect some of the most-used plugins and draw your own conclusions. We have no standards for this, just code conventions.
Colorbox (source: https://github.com/jackmoore/colorbox/blob/master/jquery.colorbox.js) defines a prefix and some constants for the event names. It also has a function for triggering and running the callbacks.
jQuery UI (source: https://github.com/jquery/jquery-ui/blob/master/ui/jquery.ui.widget.js) also has a common function on the widget class for triggering events (usage: https://github.com/jquery/jquery-ui/blob/master/ui/jquery.ui.dialog.js), but you can see that the events are hard-coded in the middle of the source, instead of being constants at the top like in Colorbox.
I personally think, and do it in my own plugins, that creating constants is much better if you have a lot of events to trigger, but it's not necessary if you will fire only 2 or 3 events.
A helper function is a must have and should be part of your template.
The event names I use and see around all follow the standard CamelCase e.g. beforeClose.
Some advocate the use of a prefix for events, like Colorbox's cbox_open, or even namespaced events like click.myPlugin (see: http://api.jquery.com/on/#event-names).
Conclusion: try to follow best practices and conventions for programming in general, and look at the better examples out there.
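As a rough template illustrating those conventions (myPlugin and the event names are made up; trigger/on are standard jQuery APIs):
(function ($) {
    // event names as constants, Colorbox-style, namespaced per plugin
    var EVENT_BEFORE_CLOSE = 'beforeClose.myPlugin',
        EVENT_CLOSE        = 'close.myPlugin';

    // small helper so every event is fired the same way
    function trigger(element, eventName, data) {
        $(element).trigger(eventName, data);
    }

    $.fn.myPlugin = function (options) {
        return this.each(function () {
            var el = this;
            // ... plugin logic ...
            trigger(el, EVENT_BEFORE_CLOSE, ['foo']);
            trigger(el, EVENT_CLOSE, ['foo']);
        });
    };
}(jQuery));

// plugin users subscribe with .on(), and can unbind everything via the namespace
$('#myPluginDiv').on('close.myPlugin', function (event, arg) { /* react here */ });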
In the plugin, create an object literal like:
var plugin = {
    show: function () {
        // code for show()
    }
};