Dojo browser performance, in terms of loading module files

Is Dojo's load-as-you-need-it structure actually a performance improvement? For me, at least?
My company's website is going to switch to IBM WebSphere, which primarily uses Dojo. My company is very concerned with page performance, mostly in terms of "seconds to page load". As a result, the directive we've been given is "minimize hits to the server", so on our current website we aggregate all our .js files before promotion to production.
That directive is basically becoming law now, so if I were to argue against it, I would need a very good rationale for the alternative. I've been unable to find anything in favor of the load-as-you-go method except "it's a good idea" and "it loads only what you need" (the latter is really just a restatement of the former, as far as I can tell).
And if I were to flatten everything into a single file, I wouldn't be able to use dojo.require() statements at all, would I? (The idea being: I could keep the development side split by module so the organization stays rational, then ship production as a single file, but dojo.require() would no longer make sense there, and I'd end up in an increasingly complex situation where the build has to do invasive things to the JavaScript to package it for production.)
Please resist the "it depends" answer. The best-practices docs I've seen (Yahoo, Google, etc.) pretty much just say "reduce requests" and don't have much "it depends" about them. But Dojo's framework seems so definitive about its approach that I'm wondering if there's a more persuasive argument for it.

Dojo actually combines the two approaches. In development mode, you have many files that use define() (dojo.require() is obsolete as of the AMD loader introduced in Dojo 1.7) to load other modules dynamically. This is very good for abstraction and development.
Then there is "production mode", where you compile all those small files into one or more minified JavaScript files (a.k.a. layers) with the Dojo build system. This reduces hits to the server while still maintaining all the modularity. With the layer approach, application code that is needed later will automatically be loaded from a separate layer file.
See:
http://dojotoolkit.org/blog/learn-more-about-amd
http://dojotoolkit.org/reference-guide/1.7/build/
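To make that concrete, here is a minimal sketch of both halves, assuming Dojo 1.7+. The module and layer names (app/main, app/widgets/Gallery) are hypothetical, but define() and the profile's layers property follow the standard AMD and build-system conventions:

// app/main.js -- a module as it exists in development mode
define(["dojo/dom", "app/widgets/Gallery"], function (dom, Gallery) {
    return {
        init: function () {
            // in dev mode, app/widgets/Gallery is fetched as its own file;
            // after a build it is already baked into this layer
            new Gallery().placeAt(dom.byId("gallery"));
        }
    };
});

// app.profile.js -- a build profile that compiles the modules into one layer
var profile = {
    basePath: "..",
    releaseDir: "release",
    layers: {
        "app/main": {
            include: ["app/main", "app/widgets/Gallery"]
        }
    }
};

Pages then load only the built release/app/main.js; any module not included in a layer is still fetched individually on demand. That is how the build keeps the single-file benefit without giving up define().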

XPage Osgi plug in development

background
I have built many tools in the past year or so that are designed to help me program for XPages. These tools primarily include helper Java classes, extended logging (making use of OpenLogger and my own stuff), and a few other things that I personally feel I cannot work without. It has been discussed with my employer, and we feel that it might be a good idea to start publishing these items to OpenNTF. Since these tools are made up of about 3 .nsfs, all designed to use the same Java code, key JavaScript classes, CSS, and even a custom control or two, I would like to consolidate the key items into a plug-in that can be installed at the server and client level. I want to do this consolidation before I even think about publishing any of the work I've done so far; it would just be far too much work to maintain otherwise, not just for me, but for potential users. I have not really found any information on how to do such a thing in Google searches. I also need to make sure that I am able to make use of the ExtLib libraries, the OpenNTF Domino API, and the Notes API.
my questions
How does one best go about designing such plug-ins? Must a designer use Eclipse, or is it possible to do this directly in the Notes Designer?
How does a designer best go about keeping a server and client up to date while designing and updating the plug-in code? Is this why GitHub is often used?
Where is the best place to get material to get started in this direction? I sort of feel lost in the woods, knowing I need to head north, but not having a compass for that first step.
Thank you very much for your input.
In my experience, I found that diving into plug-in development is a huge PITA until you get used to it, but it's definitely worth it overall.
As for whether you can use Designer for plugin development: yes, but you will likely want to move away from it eventually. I started out using Designer for this sort of thing for a while, presumably with the same sentiment as you: why bother installing another instance of Eclipse when I'm already sitting in one all day? However, between Designer's age (it's roughly equivalent to, I think, Eclipse 3.4), oddities when it comes to working sets between the "Applications" and "Project Explorer" views, and, in my case, my desire to use a Mac app, I ended up switching.
There are two major starting points: the XSP Starter Kit (http://www.openntf.org/internal/home.nsf/project.xsp?name=XSP%20Starter%20Kit) and Niklas Heidloff's video on setting up Eclipse for XPages development (http://www.openntf.org/main.nsf/blog.xsp?permaLink=NHEF-8RVB5H). The latter mentions the XPages SDK (http://www.openntf.org/internal/home.nsf/project.xsp?name=XPages%20SDK%20for%20Eclipse%20RCP), which is also useful. In my setup, I found the video largely useful, but some components were either difficult to track down (IBM's downloads are shifting sands) or optional (debugging, which will depend on whether or not you're using Eclipse on Windows).
Those resources should generally get you set up. The main thing to worry about when setting up your Eclipse environment is making sure your Plug-In Execution Environment is configured properly. If you're following the SDK setup instructions, that SHOULD get you where you need to be.
The next thing to know about is the way plugins are structured. Each plugin you want to install in Designer or Domino will be paired with a feature project (a feature can house several plugins), and potentially an update site; the last is optional if you just want to import the features into an Update Site NSF. That's how I often do my normal plugin development: export the paired feature to a directory, import the feature into the server's Update Site NSF, and then install it in Designer from there using Application -> Install. You can also set things up so that you deploy into the server's plugin/feature directories instead of installing into an update site, if you prefer. GitHub doesn't really come into play for this aspect; it's more about sharing/collaborating on your code and having a remote storage location for your git repositories (which I highly advise).
And as for the "lost in the woods" feeling: yep, you'll have that for a good while. There are lots of moving parts and esoteric concepts to get a hold of all at once. If you mostly follow the above links and then start with some basics from the XSP Starter Kit (which is itself a plugin project that you can pair with a feature) - say, printing text in the Activator class and making an implicit global variable just to make sure it works - that should help get your feet wet.
It's best done in Eclipse. You can run your code directly from there, and debug it as it runs on the server. The editors are also more up to date. You want:
Eclipse for RCP and RAP developers
XPages SDK for Eclipse RCP (from OpenNTF)
XPages Debug Plugin (from OpenNTF - basically allows you to load the plugins to the Domino server dynamically, rather than exporting to an Update Site all the time)
The XSP Starter Kit on OpenNTF is a good starting point for a plugin. There are various references to the library ID, which has to be unique to your plugin; basically, references to org.openntf.xsp.starter need changing to whatever you want to call your plugin. You're also best advised to remove what you don't need. I tend to work in a copy of the Starter Kit, remove stuff, and build; if there are errors from required classes (Activator.java will obviously be required, along with some others), I paste them back in from the Starter Kit.
XPages OpenLog Logger is a good cross-reference; it was built from the XSP Starter Kit, pretty much stripped down, so you'll be able to see what had to be changed. A lot of the elements of the XSP Starter Kit correspond to Java classes you'll probably be familiar with from your XPages Java development.
GitHub and the like tend to be used for source control, which is useful for working out what's changed from time to time.

Build an app with MarionetteJS and RequireJS?

I have used the Backbone Boilerplate in the past:
https://github.com/backbone-boilerplate/backbone-boilerplate
I want to use Marionette on my next project, and I have found this:
https://github.com/BoilerplateMVC/Marionette-Require-Boilerplate
My question is whether it's a good idea to go with the Marionette boilerplate or to start from scratch.
As an aside, I'd like to suggest you give Yeoman a shot for scaffolding your first Marionette app. Yeoman works via what are called "generators", which provide much more than the above Boilerplate MVC can offer you (Chai and Sinon for testing, Bower for client-side package management, etc.). Plus, Addy Osmani, who runs backbone-boilerplate, is one of the heads of the project. Check out generator-marionette here.
I haven't used the Boilerplate, but glancing through it, it certainly seems like a valid approach to writing Marionette apps. If you're just getting started, it will certainly help you see how the various pieces are supposed to be used. One gripe I have is the folder structure: I prefer to break my applications down into modules, and then add models, collections, views, etc. under each module. But this will certainly get you up and running quickly, and there's nothing stopping you from customizing it to suit your needs.
I agree with others here: it is a useless limitation to imitate a folder structure that follows the old MVC model for server-side code. You will remain more flexible further down the road if you think of your application strictly as completely self-contained modules, i.e. each contains its own controller/router/views/collections/templates. You can have a separate folder structure for shared code that is not a module, although anything can be made a module :)
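As a rough illustration of that layout (the names here are invented; the only assumptions are RequireJS-style define() and a Marionette region to show views in):

// modules/products/module.js -- everything the "products" feature needs
// lives under modules/products/: its router, views, and collections
define([
    "modules/products/router",
    "modules/products/views/ProductListView",
    "modules/products/collections/ProductCollection"
], function (ProductRouter, ProductListView, ProductCollection) {
    return {
        // the app passes in a region; the module never reaches into
        // another module's folder for its pieces
        start: function (region) {
            var products = new ProductCollection();
            region.show(new ProductListView({ collection: products }));
            new ProductRouter();
        }
    };
});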
Regarding boilerplate code and generators: I think in the beginning you should actually NOT use them, because you won't understand what you're doing. But that's just my personal opinion.

Camelcase, Underscore, etc. - how committed should you be?

OK, I'm not trying to start a camelCase vs. underscores discussion here; it doesn't matter what you pick, just stick to your choice.
Rather, what I would like people's opinions on is how strict and committed you should be to your choice when importing third-party libraries.
Especially in PHP there is a HUGE variety of coding styles, to the point where it's just damn near impossible to maintain one specific style throughout your codebase when you use third-party libraries.
So what do you do? Modify those libraries to suit your conventions? Write some sort of translation layer so that when you use those libraries, the usage still follows your conventions? Just say "to hell with it" and mix it all together? Or is there some other ingenious solution I haven't thought of (apart from simply not using libraries that don't follow your convention)?
In essence, what I'm asking is: how do you manage to maintain a clean and consistent coding style when using third-party libraries? Can it be done?
I say "to hell with it" and mix it all together. It can be somewhat annoying to have the mixed styles, but I don't think it's worth it to do a bunch of work to avoid this.

Looking for hierarchic feature/task tracker system

I use Trac to track the bugs related to my PHP web application. Though, mainly I register feature requests/tasks in Trac. Do you find that a good practice, by the way?
It's very handy, because I can track my tasks via Eclipse/Mylyn, comment on them, and fix them. I like Trac very much, but I'm afraid of accumulating a lot of loosely coupled tasks that almost look like bugs.
Is there a way (or other tracker system) to store my tasks hierarchically? I mean:
Store module (feature)
    Add product (feature)
    List product (feature)
    Delete product (feature)
        Unable to delete no-name product (bug)
Other module, etc.
Edit: Are there any other good practices for where and how to store tasks hierarchically?
FogBugz has tasks & subtasks; I haven't worked with this feature enough to see if it would help, though. You could play around with the hosted eval version. (For my taste, the web interface feels too sluggish for me to use it, but I have that problem with lots of things.)
I recognize your problem as one of my own; however, I'd prefer to use separate lists/hierarchies.
[update]
At the moment I am using starring and heavy search/filtering, and for "keeping my head on" with quickly incoming tasks or larger refactors, I use pen & paper for temporary items (an A5 ring-bound booklet) and ToDoList for semi-permanent ones.
JIRA also has this functionality, and it's almost free ($10 for 10 users).
And yes... I think this is good practice; just don't overuse it.
You could stick with Trac and look for the desired functionality at http://trac-hacks.org/
These look like what you want (there might be others; I only did a quick search):
http://trac-hacks.org/wiki/MasterTicketsPlugin
http://trac-hacks.org/wiki/TracTicketDepgraphPlugin
We are using a couple of plugins from http://trac-hacks.org/ with Trac 0.11 and they work great.
Have a look at the Roundup Issue Tracker.
Years ago, before Trac came out, I wrote several user support and development trackers with it. It's very, very easy to customize the database schema and create new html page templates.
To manage hierarchical tasks, you basically define an IssueClass-based task class like this:
task = IssueClass(db, "task",
    dependson=Multilink("task"),  # here, you link tasks to other tasks
    assignedto=Link("user"),
    keyword=Multilink("keyword"),
    priority=Link("priority"),
    status=Link("status"))
There's a recipe in the Roundup documentation that shows you how to create "blockers" issues, meaning that you can't close an issue if one of its linked issues is not closed:
http://www.roundup-tracker.org/docs/customizing.html#blocking-issues-that-depend-on-other-issues
TargetProcess supports the hierarchical structure you want. It is agile software project management software, but it features highly customizable development processes and can therefore also be used for waterfall or Kanban/Lean processes. The deepest hierarchy you can have goes like this:
Program
    Project
        Release
            Feature
                User Story
                    Task
There is a free community edition which you can use for up to 5 users. TP offers a lot more than just task tracking: it features bug tracking, Q&A, help desk, time tracking...
You might look at GoPlan: http://goplanapp.com/.
It is a fully functional project management web application that lets you create a hierarchy of tasks. There is a free plan, so you can check it out easily. The task tree can have any depth.
The difference between this tool and Trac is that GoPlan is aimed at managing the project itself rather than the source code, so you cannot close your tickets from Eclipse. Unfortunately, tasks do not have resolutions (tickets do, but tickets cannot be arranged in a hierarchy); I don't think that is the kind of disadvantage that should discourage you from using this application, though.
You've probably already thought of this, but I'll put it in just in case. In Trac, I often organized tickets as sub-tasks by convention, simply placing links to those tickets in the description of the master ticket. What's nice about this is that closed tickets are shown crossed out, so you can get an idea of the status of the sub-tickets at a glance. OK, it's not a real hierarchy, but it's a flexible system that also allows you to set up other relationships; for example, you can also reference another ticket as a dependency or related issue.
Some of the requirements management tools out there support hierarchies, e.g. CaliberRM from Borland. However, these are heavyweight and commercial, and only make sense if you have a significant amount of information to handle.

Choosing between Impala and OSGi

I've been investigating OSGi for my company's software, but have recently been recommended to take a look at Impala. According to its web page, Impala is "a dynamic module framework for Java-based web applications, based on the Spring Framework."
At a glance, and looking at this blog post about the differences, the key differences I can see are that Impala is simpler than OSGi, does not manage versioning of third party components, and is far less widely used/known (I do not see a single question about it on Stack Overflow).
I wonder whether people who have direct experience with Impala and OSGi (i.e. those who have investigated it more deeply than reading blog posts and online docs), have any more insights into the practical differences between the two, and/or suggestions about what types of projects each one may be more or less suitable for.
Edit: It may also be interesting to include SpringSource Slices in the comparison, although it is as yet an early prototype. At a glance, it appears to work only in dm Server.
Impala's approach to modularity is very weak when it comes to controlled sharing between modules. The problem is that Impala still follows the old J2EE-style hierarchical approach to classloading.
Anybody can write a module system that restricts visibility of classes across modules. The difficult part is how you reintroduce dependencies between modules such that specific classes and interfaces from one module can be seen by another module. In OSGi we do this by exporting and importing packages, so we have a non-hierarchical dependency graph.
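For illustration, that sharing is declared in each bundle's MANIFEST.MF rather than inherited from a parent classloader (the bundle and package names below are hypothetical):

Bundle-SymbolicName: com.example.libprovider
Export-Package: com.example.lib;version="1.2.0"

Any other module, sibling or not, imports the package explicitly:

Bundle-SymbolicName: com.example.client
Import-Package: com.example.lib;version="[1.2,2.0)"

The framework wires each importer to a compatible exporter, so two versions of a library can even coexist, which is precisely what the hierarchical model described next cannot accommodate.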
In Impala, if you want to see the classes in another module, then your module must be a child or descendant of that module. That is, modules can only see their own classes and those of their ancestors. Now if you want to share some classes with your sibling module (e.g. a library that you both use) then you must move that library up into the classpath of your shared ancestor. In the worst case you have to move it right up to the root module. Now the library is visible to ALL other modules whether they want it or not! Indeed, if another module wanted to use a different version of the library they would be prevented from doing so.
If you simply have a copy of the library in each place where it is used, then you make it impossible for the modules using that library to communicate with each other. They will get ClassCastExceptions when they try to pass objects between each other.
A similar problem is inherent in J2EE if two web applications need to use the same library. Typically J2EE developers just copy the library, but this creates "silo" applications that cannot communicate with each other. It is simply not the way to build modular software.
Steven's points also seem pertinent. As far as I can tell, nobody is using Impala aside from its author.
In my eyes, there is no comparison. OSGi is a mature framework that's been around for 10 years and is the basis for the implementation of most of today's Java containers. OSGi has growing adoption, there are books available and, yes, people talk about it on Stack Overflow!
Impala hasn't even hit a stable release and appears to be a 1-man project, though he is asking for additional developers now.
So, it depends on your criteria. If you are investigating technology out of interest, then I don't see any issue writing stuff with Impala. If you are looking to base your company's future products on it, then I think that would be professionally negligent.
